00:00:00.000 Started by upstream project "autotest-per-patch" build number 132718
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.185 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.185 The recommended git tool is: git
00:00:00.185 using credential 00000000-0000-0000-0000-000000000002
00:00:00.188 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.223 Fetching changes from the remote Git repository
00:00:00.226 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.254 Using shallow fetch with depth 1
00:00:00.254 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.254 > git --version # timeout=10
00:00:00.273 > git --version # 'git version 2.39.2'
00:00:00.273 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.285 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.285 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.532 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.544 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.556 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.556 > git config core.sparsecheckout # timeout=10
00:00:07.567 > git read-tree -mu HEAD # timeout=10
00:00:07.584 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.609 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.609 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
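A hedged standalone equivalent of the shallow checkout the git plugin performs above (flags copied from the logged commands; the target directory name is illustrative):

    # Sketch only: mirrors the plugin's fetch/checkout sequence by hand.
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507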
00:00:07.711 [Pipeline] Start of Pipeline
00:00:07.726 [Pipeline] library
00:00:07.728 Loading library shm_lib@master
00:00:07.728 Library shm_lib@master is cached. Copying from home.
00:00:07.746 [Pipeline] node
00:00:07.758 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.759 [Pipeline] {
00:00:07.769 [Pipeline] catchError
00:00:07.770 [Pipeline] {
00:00:07.781 [Pipeline] wrap
00:00:07.790 [Pipeline] {
00:00:07.796 [Pipeline] stage
00:00:07.798 [Pipeline] { (Prologue)
00:00:07.813 [Pipeline] echo
00:00:07.814 Node: VM-host-SM38
00:00:07.819 [Pipeline] cleanWs
00:00:07.829 [WS-CLEANUP] Deleting project workspace...
00:00:07.829 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.837 [WS-CLEANUP] done
00:00:08.022 [Pipeline] setCustomBuildProperty
00:00:08.083 [Pipeline] httpRequest
00:00:10.918 [Pipeline] echo
00:00:10.919 Sorcerer 10.211.164.101 is alive
00:00:10.929 [Pipeline] retry
00:00:10.930 [Pipeline] {
00:00:10.941 [Pipeline] httpRequest
00:00:10.945 HttpMethod: GET
00:00:10.946 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.946 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.962 Response Code: HTTP/1.1 200 OK
00:00:10.963 Success: Status code 200 is in the accepted range: 200,404
00:00:10.963 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.534 [Pipeline] }
00:00:19.550 [Pipeline] // retry
00:00:19.557 [Pipeline] sh
00:00:19.842 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.862 [Pipeline] httpRequest
00:00:20.282 [Pipeline] echo
00:00:20.284 Sorcerer 10.211.164.101 is alive
00:00:20.295 [Pipeline] retry
00:00:20.297 [Pipeline] {
00:00:20.312 [Pipeline] httpRequest
00:00:20.317 HttpMethod: GET
00:00:20.318 URL: http://10.211.164.101/packages/spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:20.318 Sending request to url: http://10.211.164.101/packages/spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:20.337 Response Code: HTTP/1.1 200 OK
00:00:20.338 Success: Status code 200 is in the accepted range: 200,404
00:00:20.338 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:47.922 [Pipeline] }
00:00:47.941 [Pipeline] // retry
00:00:47.949 [Pipeline] sh
00:00:48.235 + tar --no-same-owner -xf spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz
00:00:51.558 [Pipeline] sh
00:00:51.841 + git -C spdk log --oneline -n5
00:00:51.841 500d76084 nvmf: added support for add/delete host wrt referral
00:00:51.841 a5e6ecf28 lib/reduce: Data copy logic in thin read operations
00:00:51.841 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair
00:00:51.841 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting
00:00:51.841 e2dfdf06c accel/mlx5: Register post_poller handler
00:00:51.863 [Pipeline] writeFile
00:00:51.881 [Pipeline] sh
00:00:52.167 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:52.180 [Pipeline] sh
00:00:52.468 + cat autorun-spdk.conf
00:00:52.468 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.468 SPDK_TEST_NVME=1
00:00:52.468 SPDK_TEST_FTL=1
00:00:52.468 SPDK_TEST_ISAL=1
00:00:52.468 SPDK_RUN_ASAN=1
00:00:52.468 SPDK_RUN_UBSAN=1
00:00:52.468 SPDK_TEST_XNVME=1
00:00:52.468 SPDK_TEST_NVME_FDP=1
00:00:52.468 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:52.476 RUN_NIGHTLY=0
00:00:52.478 [Pipeline] }
00:00:52.493 [Pipeline] // stage
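The two package tarballs fetched above from the "Sorcerer" mirror (10.211.164.101) can also be pulled by hand. A minimal sketch of the fetch-with-retry-then-unpack pattern, assuming curl; the pipeline itself uses the Jenkins httpRequest step inside a retry block:

    #!/usr/bin/env bash
    # Sketch: fetch and unpack one prebuilt package, as the pipeline does.
    # Host and tarball name come from the log; retry count is an assumption.
    set -euo pipefail
    pkg="spdk_500d7608431001c9b7144808a6c684c47e67d513.tar.gz"
    url="http://10.211.164.101/packages/${pkg}"
    for attempt in 1 2 3; do
        curl -fSs -o "$pkg" "$url" && break
        echo "attempt $attempt failed, retrying" >&2
    done
    tar --no-same-owner -xf "$pkg"   # same extraction flags as the log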
00:00:52.510 [Pipeline] stage
00:00:52.512 [Pipeline] { (Run VM)
00:00:52.525 [Pipeline] sh
00:00:52.881 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:52.881 + echo 'Start stage prepare_nvme.sh'
00:00:52.881 Start stage prepare_nvme.sh
00:00:52.881 + [[ -n 9 ]]
00:00:52.881 + disk_prefix=ex9
00:00:52.881 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:00:52.881 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:00:52.881 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:00:52.881 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.881 ++ SPDK_TEST_NVME=1
00:00:52.881 ++ SPDK_TEST_FTL=1
00:00:52.881 ++ SPDK_TEST_ISAL=1
00:00:52.881 ++ SPDK_RUN_ASAN=1
00:00:52.881 ++ SPDK_RUN_UBSAN=1
00:00:52.881 ++ SPDK_TEST_XNVME=1
00:00:52.881 ++ SPDK_TEST_NVME_FDP=1
00:00:52.881 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:52.881 ++ RUN_NIGHTLY=0
00:00:52.881 + cd /var/jenkins/workspace/nvme-vg-autotest
00:00:52.881 + nvme_files=()
00:00:52.881 + declare -A nvme_files
00:00:52.881 + backend_dir=/var/lib/libvirt/images/backends
00:00:52.881 + nvme_files['nvme.img']=5G
00:00:52.881 + nvme_files['nvme-cmb.img']=5G
00:00:52.881 + nvme_files['nvme-multi0.img']=4G
00:00:52.881 + nvme_files['nvme-multi1.img']=4G
00:00:52.881 + nvme_files['nvme-multi2.img']=4G
00:00:52.881 + nvme_files['nvme-openstack.img']=8G
00:00:52.881 + nvme_files['nvme-zns.img']=5G
00:00:52.881 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:52.881 + (( SPDK_TEST_FTL == 1 ))
00:00:52.881 + nvme_files["nvme-ftl.img"]=6G
00:00:52.881 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:52.881 + nvme_files["nvme-fdp.img"]=1G
00:00:52.881 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:52.881 + for nvme in "${!nvme_files[@]}"
00:00:52.881 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G
00:00:52.881 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:52.881 + for nvme in "${!nvme_files[@]}"
00:00:52.881 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G
00:00:53.455 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:53.455 + for nvme in "${!nvme_files[@]}"
00:00:53.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G
00:00:53.455 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.455 + for nvme in "${!nvme_files[@]}"
00:00:53.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G
00:00:53.455 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:53.455 + for nvme in "${!nvme_files[@]}"
00:00:53.455 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G
00:00:53.716 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:53.716 + for nvme in "${!nvme_files[@]}"
00:00:53.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G
00:00:53.716 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.716 + for nvme in "${!nvme_files[@]}"
00:00:53.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G
00:00:53.716 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:53.716 + for nvme in "${!nvme_files[@]}"
00:00:53.716 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G
00:00:53.977 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:53.977 + for nvme in "${!nvme_files[@]}"
00:00:53.977 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G
00:00:54.552 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:54.552 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu
00:00:54.552 + echo 'End stage prepare_nvme.sh'
00:00:54.552 End stage prepare_nvme.sh
00:00:54.566 [Pipeline] sh
00:00:54.852 + DISTRO=fedora39
00:00:54.852 + CPUS=10
00:00:54.852 + RAM=12288
00:00:54.852 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:54.852 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:54.852
00:00:54.852 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:00:54.852 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:00:54.852 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:00:54.852 HELP=0
00:00:54.852 DRY_RUN=0
00:00:54.852 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,
00:00:54.852 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:54.852 NVME_AUTO_CREATE=0
00:00:54.852 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,,
00:00:54.852 NVME_CMB=,,,,
00:00:54.852 NVME_PMR=,,,,
00:00:54.852 NVME_ZNS=,,,,
00:00:54.852 NVME_MS=true,,,,
00:00:54.852 NVME_FDP=,,,on,
00:00:54.852 SPDK_VAGRANT_DISTRO=fedora39
00:00:54.852 SPDK_VAGRANT_VMCPU=10
00:00:54.852 SPDK_VAGRANT_VMRAM=12288
00:00:54.852 SPDK_VAGRANT_PROVIDER=libvirt
00:00:54.852 SPDK_VAGRANT_HTTP_PROXY=
00:00:54.852 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:54.852 SPDK_OPENSTACK_NETWORK=0
00:00:54.852 VAGRANT_PACKAGE_BOX=0
00:00:54.852 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:54.852 FORCE_DISTRO=true
00:00:54.852 VAGRANT_BOX_VERSION=
00:00:54.852 EXTRA_VAGRANTFILES=
00:00:54.852 NIC_MODEL=e1000
00:00:54.852
00:00:54.852 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:00:54.852 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
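prepare_nvme.sh above builds an associative array of backing files and sizes, adds the FTL and FDP images only when the matching test flags are set, then formats each file. A hedged reconstruction of that loop; using qemu-img directly is an assumption, since the log only shows create_nvme_img.sh, whose "Formatting ... fmt=raw ... preallocation=falloc" output matches qemu-img's:

    #!/usr/bin/env bash
    # Sketch of the disk-provisioning loop traced above.
    declare -A nvme_files=(
        ['nvme.img']=5G ['nvme-cmb.img']=5G ['nvme-multi0.img']=4G
        ['nvme-multi1.img']=4G ['nvme-multi2.img']=4G
        ['nvme-openstack.img']=8G ['nvme-zns.img']=5G
    )
    (( SPDK_TEST_FTL == 1 ))      && nvme_files['nvme-ftl.img']=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        # Assumption: create_nvme_img.sh wraps something like this.
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex9-$nvme" "${nvme_files[$nvme]}"
    done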
00:00:57.393 Bringing machine 'default' up with 'libvirt' provider...
00:00:57.653 ==> default: Creating image (snapshot of base box volume).
00:00:58.223 ==> default: Creating domain with the following settings...
00:00:58.223 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733479203_35a705f176245309a984
00:00:58.223 ==> default: -- Domain type: kvm
00:00:58.223 ==> default: -- Cpus: 10
00:00:58.223 ==> default: -- Feature: acpi
00:00:58.223 ==> default: -- Feature: apic
00:00:58.223 ==> default: -- Feature: pae
00:00:58.223 ==> default: -- Memory: 12288M
00:00:58.223 ==> default: -- Memory Backing: hugepages:
00:00:58.223 ==> default: -- Management MAC:
00:00:58.223 ==> default: -- Loader:
00:00:58.223 ==> default: -- Nvram:
00:00:58.223 ==> default: -- Base box: spdk/fedora39
00:00:58.223 ==> default: -- Storage pool: default
00:00:58.223 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733479203_35a705f176245309a984.img (20G)
00:00:58.223 ==> default: -- Volume Cache: default
00:00:58.223 ==> default: -- Kernel:
00:00:58.223 ==> default: -- Initrd:
00:00:58.223 ==> default: -- Graphics Type: vnc
00:00:58.223 ==> default: -- Graphics Port: -1
00:00:58.223 ==> default: -- Graphics IP: 127.0.0.1
00:00:58.223 ==> default: -- Graphics Password: Not defined
00:00:58.223 ==> default: -- Video Type: cirrus
00:00:58.223 ==> default: -- Video VRAM: 9216
00:00:58.223 ==> default: -- Sound Type:
00:00:58.223 ==> default: -- Keymap: en-us
00:00:58.223 ==> default: -- TPM Path:
00:00:58.223 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:58.223 ==> default: -- Command line args:
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:58.223 ==> default: -> value=-drive,
00:00:58.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:58.223 ==> default: -> value=-drive,
00:00:58.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:58.223 ==> default: -> value=-drive,
00:00:58.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:58.223 ==> default: -> value=-drive,
00:00:58.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:58.223 ==> default: -> value=-drive,
00:00:58.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:58.223 ==> default: -> value=-drive,
00:00:58.223 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:58.223 ==> default: -> value=-device,
00:00:58.223 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:58.223 ==> default: Creating shared folders metadata...
00:00:58.223 ==> default: Starting domain.
00:01:00.132 ==> default: Waiting for domain to get an IP address...
00:01:15.065 ==> default: Waiting for SSH to become available...
00:01:16.447 ==> default: Configuring and enabling network interfaces...
00:01:20.648 default: SSH address: 192.168.121.90:22
00:01:20.648 default: SSH username: vagrant
00:01:20.648 default: SSH auth method: private key
00:01:23.255 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:31.399 ==> default: Mounting SSHFS shared folder...
00:01:33.308 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:33.308 ==> default: Checking Mount..
00:01:34.252 ==> default: Folder Successfully Mounted!
00:01:34.509
00:01:34.509 SUCCESS!
00:01:34.509
00:01:34.509 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:34.509 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:34.509 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:34.509
00:01:34.517 [Pipeline] }
00:01:34.531 [Pipeline] // stage
00:01:34.539 [Pipeline] dir
00:01:34.540 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:34.541 [Pipeline] {
00:01:34.553 [Pipeline] catchError
00:01:34.554 [Pipeline] {
00:01:34.565 [Pipeline] sh
00:01:34.849 + vagrant ssh-config --host vagrant
00:01:34.849 + sed -ne '/^Host/,$p'
00:01:34.849 + tee ssh_conf
00:01:38.142 Host vagrant
00:01:38.142 HostName 192.168.121.90
00:01:38.142 User vagrant
00:01:38.142 Port 22
00:01:38.142 UserKnownHostsFile /dev/null
00:01:38.142 StrictHostKeyChecking no
00:01:38.142 PasswordAuthentication no
00:01:38.142 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:38.142 IdentitiesOnly yes
00:01:38.142 LogLevel FATAL
00:01:38.142 ForwardAgent yes
00:01:38.142 ForwardX11 yes
00:01:38.142
00:01:38.155 [Pipeline] withEnv
00:01:38.158 [Pipeline] {
00:01:38.172 [Pipeline] sh
00:01:38.451 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:38.451 source /etc/os-release
00:01:38.451 [[ -e /image.version ]] && img=$(< /image.version)
00:01:38.451 # Minimal, systemd-like check.
00:01:38.451 if [[ -e /.dockerenv ]]; then
00:01:38.451 # Clear garbage from the node'\''s name:
00:01:38.451 # agt-er_autotest_547-896 -> autotest_547-896
00:01:38.451 # $HOSTNAME is the actual container id
00:01:38.451 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:38.451 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:38.451 # We can assume this is a mount from a host where container is running,
00:01:38.451 # so fetch its hostname to easily identify the target swarm worker.
00:01:38.451 container="$(< /etc/hostname) ($agent)"
00:01:38.451 else
00:01:38.451 # Fallback
00:01:38.451 container=$agent
00:01:38.451 fi
00:01:38.451 fi
00:01:38.451 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:38.451 '
00:01:38.720 [Pipeline] }
00:01:38.736 [Pipeline] // withEnv
00:01:38.745 [Pipeline] setCustomBuildProperty
00:01:38.760 [Pipeline] stage
00:01:38.762 [Pipeline] { (Tests)
00:01:38.779 [Pipeline] sh
00:01:39.078 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:39.092 [Pipeline] sh
00:01:39.363 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:39.639 [Pipeline] timeout
00:01:39.639 Timeout set to expire in 50 min
00:01:39.641 [Pipeline] {
00:01:39.655 [Pipeline] sh
00:01:39.940 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:40.513 HEAD is now at 500d76084 nvmf: added support for add/delete host wrt referral
00:01:40.525 [Pipeline] sh
00:01:40.811 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:41.171 [Pipeline] sh
00:01:41.458 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:41.737 [Pipeline] sh
00:01:42.024 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:42.287 ++ readlink -f spdk_repo
00:01:42.287 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:42.287 + [[ -n /home/vagrant/spdk_repo ]]
00:01:42.287 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:42.287 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:42.287 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:42.287 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:42.287 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:42.287 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:42.287 + cd /home/vagrant/spdk_repo
00:01:42.287 + source /etc/os-release
00:01:42.287 ++ NAME='Fedora Linux'
00:01:42.287 ++ VERSION='39 (Cloud Edition)'
00:01:42.287 ++ ID=fedora
00:01:42.287 ++ VERSION_ID=39
00:01:42.287 ++ VERSION_CODENAME=
00:01:42.287 ++ PLATFORM_ID=platform:f39
00:01:42.287 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:42.287 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:42.287 ++ LOGO=fedora-logo-icon
00:01:42.287 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:42.287 ++ HOME_URL=https://fedoraproject.org/
00:01:42.287 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:42.287 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:42.287 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:42.287 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:42.287 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:42.287 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:42.287 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:42.287 ++ SUPPORT_END=2024-11-12
00:01:42.287 ++ VARIANT='Cloud Edition'
00:01:42.287 ++ VARIANT_ID=cloud
00:01:42.287 + uname -a
00:01:42.287 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:42.287 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:42.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:42.810 Hugepages
00:01:42.810 node hugesize free / total
00:01:43.072 node0 1048576kB 0 / 0
00:01:43.072 node0 2048kB 0 / 0
00:01:43.072
00:01:43.072 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:43.072 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:43.072 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:43.072 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:43.072 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:43.072 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
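The four NVMe controllers in this table are the ones wired up by the "Command line args" recorded during domain creation earlier; nvme3 (0000:00:13.0) is the FDP-enabled controller. A hedged sketch of just that controller re-expressed as a direct QEMU invocation; the device and drive parameters are copied from the log, while the -machine/-m values and running QEMU by hand are illustrative assumptions:

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -machine q35,accel=kvm -m 2048 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

fdp.runs, fdp.nrg and fdp.nruh size the reclaim units, reclaim groups and reclaim unit handles of the emulated endurance group, which is what the SPDK_TEST_NVME_FDP=1 tests exercise.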
00:01:43.072 + rm -f /tmp/spdk-ld-path
00:01:43.072 + source autorun-spdk.conf
00:01:43.072 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.072 ++ SPDK_TEST_NVME=1
00:01:43.072 ++ SPDK_TEST_FTL=1
00:01:43.072 ++ SPDK_TEST_ISAL=1
00:01:43.072 ++ SPDK_RUN_ASAN=1
00:01:43.072 ++ SPDK_RUN_UBSAN=1
00:01:43.072 ++ SPDK_TEST_XNVME=1
00:01:43.072 ++ SPDK_TEST_NVME_FDP=1
00:01:43.072 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:43.072 ++ RUN_NIGHTLY=0
00:01:43.072 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:43.072 + [[ -n '' ]]
00:01:43.072 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:43.072 + for M in /var/spdk/build-*-manifest.txt
00:01:43.072 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:43.072 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:43.072 + for M in /var/spdk/build-*-manifest.txt
00:01:43.072 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:43.072 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:43.072 + for M in /var/spdk/build-*-manifest.txt
00:01:43.072 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:43.072 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:43.072 ++ uname
00:01:43.072 + [[ Linux == \L\i\n\u\x ]]
00:01:43.072 + sudo dmesg -T
00:01:43.072 + sudo dmesg --clear
00:01:43.072 + dmesg_pid=5033
+ [[ Fedora Linux == FreeBSD ]]
00:01:43.072 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:43.072 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:43.072 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:43.072 + [[ -x /usr/src/fio-static/fio ]]
00:01:43.072 + sudo dmesg -Tw
00:01:43.072 + export FIO_BIN=/usr/src/fio-static/fio
00:01:43.072 + FIO_BIN=/usr/src/fio-static/fio
00:01:43.072 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:43.072 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:43.072 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:43.072 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:43.072 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:43.072 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:43.072 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:43.072 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:43.072 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:43.335 10:00:49 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:43.335 10:00:49 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:43.335 10:00:49 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:43.335 10:00:49 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:43.335 10:00:49 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:43.335 10:00:49 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:43.335 10:00:49 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:43.335 10:00:49 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:43.335 10:00:49 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:43.335 10:00:49 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:43.335 10:00:49 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:43.335 10:00:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.335 10:00:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.335 10:00:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.335 10:00:49 -- paths/export.sh@5 -- $ export PATH
00:01:43.336 10:00:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:43.336 10:00:49 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:43.336 10:00:49 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:43.336 10:00:49 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733479249.XXXXXX
00:01:43.336 10:00:49 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733479249.Hc5pQk
00:01:43.336 10:00:49 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:43.336 10:00:49 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:43.336 10:00:49 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:43.336 10:00:49 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:43.336 10:00:49 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:43.336 10:00:49 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:43.336 10:00:49 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:43.336 10:00:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.336 10:00:49 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:43.336 10:00:49 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:43.336 10:00:49 -- pm/common@17 -- $ local monitor
00:01:43.336 10:00:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.336 10:00:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:43.336 10:00:49 -- pm/common@25 -- $ sleep 1
00:01:43.336 10:00:49 -- pm/common@21 -- $ date +%s
00:01:43.336 10:00:49 -- pm/common@21 -- $ date +%s
00:01:43.336 10:00:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733479249
00:01:43.336 10:00:49 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733479249
00:01:43.336 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733479249_collect-vmstat.pm.log
00:01:43.336 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733479249_collect-cpu-load.pm.log
00:01:44.277 10:00:50 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:44.277 10:00:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:44.277 10:00:50 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:44.277 10:00:50 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:44.277 10:00:50 -- spdk/autobuild.sh@16 -- $ date -u
00:01:44.277 Fri Dec 6 10:00:50 AM UTC 2024
00:01:44.278 10:00:50 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:44.278 v25.01-pre-304-g500d76084
00:01:44.278 10:00:50 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:44.278 10:00:50 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:44.278 10:00:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:44.278 10:00:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:44.278 10:00:50 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.278 ************************************
00:01:44.278 START TEST asan
00:01:44.278 ************************************
00:01:44.278 using asan
00:01:44.278 10:00:50 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:44.278
00:01:44.278 real 0m0.000s
00:01:44.278 user 0m0.000s
00:01:44.278 sys 0m0.000s
00:01:44.278 10:00:50 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:44.278 10:00:50 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:44.278 ************************************
00:01:44.278 END TEST asan
00:01:44.278 ************************************
00:01:44.539 10:00:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:44.539 10:00:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:44.540 10:00:50 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:44.540 10:00:50 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:44.540 10:00:50 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.540 ************************************
00:01:44.540 START TEST ubsan
00:01:44.540 ************************************
00:01:44.540 using ubsan
00:01:44.540 10:00:50 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:44.540
00:01:44.540 real 0m0.000s
00:01:44.540 user 0m0.000s
00:01:44.540 sys 0m0.000s
00:01:44.540 ************************************
00:01:44.540 END TEST ubsan
00:01:44.540 ************************************
00:01:44.540 10:00:50 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:44.540 10:00:50 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:44.540 10:00:50 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:44.540 10:00:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:44.540 10:00:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:44.540 10:00:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:44.540 10:00:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:44.540 10:00:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:44.540 10:00:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:44.540 10:00:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:44.540 10:00:50 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:44.540 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:44.540 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:45.113 Using 'verbs' RDMA provider
00:01:58.324 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:10.573 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:10.573 Creating mk/config.mk...done.
00:02:10.573 Creating mk/cc.flags.mk...done.
00:02:10.573 Type 'make' to build.
00:02:10.573 10:01:15 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:10.573 10:01:15 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:10.573 10:01:15 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:10.573 10:01:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:10.573 ************************************
00:02:10.573 START TEST make
00:02:10.573 ************************************
00:02:10.573 10:01:15 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:10.573 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:10.573 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:10.573 meson setup builddir \
00:02:10.573 -Dwith-libaio=enabled \
00:02:10.573 -Dwith-liburing=enabled \
00:02:10.573 -Dwith-libvfn=disabled \
00:02:10.573 -Dwith-spdk=disabled \
00:02:10.573 -Dexamples=false \
00:02:10.573 -Dtests=false \
00:02:10.573 -Dtools=false && \
00:02:10.573 meson compile -C builddir && \
00:02:10.573 cd -)
00:02:10.573 make[1]: Nothing to be done for 'all'.
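The START TEST / END TEST banners and the real/user/sys lines in this log come from SPDK's run_test wrapper. A hedged, simplified re-implementation of the pattern; the real wrapper in autotest_common.sh does more bookkeeping (xtrace management, timing records):

    # Sketch of the run_test pattern seen above; not SPDK's actual code.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    run_test make make -j10        # as invoked by autobuild.sh above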
00:02:11.504 The Meson build system
00:02:11.504 Version: 1.5.0
00:02:11.504 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:11.504 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:11.504 Build type: native build
00:02:11.504 Project name: xnvme
00:02:11.504 Project version: 0.7.5
00:02:11.504 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:11.504 C linker for the host machine: cc ld.bfd 2.40-14
00:02:11.504 Host machine cpu family: x86_64
00:02:11.504 Host machine cpu: x86_64
00:02:11.504 Message: host_machine.system: linux
00:02:11.504 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:11.504 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:11.504 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:11.504 Run-time dependency threads found: YES
00:02:11.504 Has header "setupapi.h" : NO
00:02:11.504 Has header "linux/blkzoned.h" : YES
00:02:11.504 Has header "linux/blkzoned.h" : YES (cached)
00:02:11.504 Has header "libaio.h" : YES
00:02:11.504 Library aio found: YES
00:02:11.504 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:11.504 Run-time dependency liburing found: YES 2.2
00:02:11.504 Dependency libvfn skipped: feature with-libvfn disabled
00:02:11.504 Found CMake: /usr/bin/cmake (3.27.7)
00:02:11.504 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:11.504 Subproject spdk : skipped: feature with-spdk disabled
00:02:11.504 Run-time dependency appleframeworks found: NO (tried framework)
00:02:11.504 Run-time dependency appleframeworks found: NO (tried framework)
00:02:11.504 Library rt found: YES
00:02:11.505 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:11.505 Configuring xnvme_config.h using configuration
00:02:11.505 Configuring xnvme.spec using configuration
00:02:11.505 Run-time dependency bash-completion found: YES 2.11
00:02:11.505 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:11.505 Program cp found: YES (/usr/bin/cp)
00:02:11.505 Build targets in project: 3
00:02:11.505
00:02:11.505 xnvme 0.7.5
00:02:11.505
00:02:11.505 Subprojects
00:02:11.505 spdk : NO Feature 'with-spdk' disabled
00:02:11.505
00:02:11.505 User defined options
00:02:11.505 examples : false
00:02:11.505 tests : false
00:02:11.505 tools : false
00:02:11.505 with-libaio : enabled
00:02:11.505 with-liburing: enabled
00:02:11.505 with-libvfn : disabled
00:02:11.505 with-spdk : disabled
00:02:11.505
00:02:11.505 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:12.068 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:12.068 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:12.068 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:12.068 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:12.068 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:12.068 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:12.068 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:12.068 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:12.068 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:12.068 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:12.068 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:12.069 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:12.069 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:12.069 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:12.325 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:12.325 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:12.325 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:12.325 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:12.325 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:12.325 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:12.325 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:12.325 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:12.325 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:12.325 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:12.325 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:12.325 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:12.325 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:12.325 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:12.325 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:12.325 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:12.325 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:12.325 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:12.325 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:12.325 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:12.325 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:12.325 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:12.325 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:12.325 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:12.325 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:12.325 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:12.325 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:12.325 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:12.325 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:12.325 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:12.325 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:12.325 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:12.325 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:12.325 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:12.325 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:12.325 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:12.325 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:12.584 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:12.584 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:12.584 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:12.584 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:12.584 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:12.584 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:12.584 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:12.584 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:12.584 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:12.584 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:12.584 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:12.584 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:12.584 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:12.584 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:12.584 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:12.584 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:12.584 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:12.584 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:12.584 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:12.855 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:12.855 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:12.855 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:12.855 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:13.113 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:13.113 [75/76] Linking static target lib/libxnvme.a
00:02:13.113 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:13.113 INFO: autodetecting backend as ninja
00:02:13.113 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:13.371 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:21.482 The Meson build system
00:02:21.482 Version: 1.5.0
00:02:21.482 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:21.482 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:21.482 Build type: native build
00:02:21.482 Program cat found: YES (/usr/bin/cat)
00:02:21.482 Project name: DPDK
00:02:21.482 Project version: 24.03.0
00:02:21.482 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:21.482 C linker for the host machine: cc ld.bfd 2.40-14
00:02:21.482 Host machine cpu family: x86_64
00:02:21.482 Host machine cpu: x86_64
00:02:21.482 Message: ## Building in Developer Mode ##
00:02:21.482 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:21.482 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:21.482 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:21.482 Program python3 found: YES (/usr/bin/python3)
00:02:21.482 Program cat found: YES (/usr/bin/cat)
00:02:21.482 Compiler for C supports arguments -march=native: YES
00:02:21.482 Checking for size of "void *" : 8
00:02:21.482 Checking for size of "void *" : 8 (cached)
00:02:21.482 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:21.482 Library m found: YES
00:02:21.482 Library numa found: YES
00:02:21.482 Has header "numaif.h" : YES
00:02:21.482 Library fdt found: NO
00:02:21.482 Library execinfo found: NO
00:02:21.482 Has header "execinfo.h" : YES
00:02:21.482 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:21.482 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:21.482 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:21.482 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:21.482 Run-time dependency openssl found: YES 3.1.1
00:02:21.482 Run-time dependency libpcap found: YES 1.10.4
00:02:21.482 Has header "pcap.h" with dependency libpcap: YES
00:02:21.482 Compiler for C supports arguments -Wcast-qual: YES
00:02:21.482 Compiler for C supports arguments -Wdeprecated: YES
00:02:21.482 Compiler for C supports arguments -Wformat: YES
00:02:21.482 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:21.482 Compiler for C supports arguments -Wformat-security: NO
00:02:21.482 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:21.482 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:21.482 Compiler for C supports arguments -Wnested-externs: YES
00:02:21.482 Compiler for C supports arguments -Wold-style-definition: YES
00:02:21.482 Compiler for C supports arguments -Wpointer-arith: YES
00:02:21.482 Compiler for C supports arguments -Wsign-compare: YES
00:02:21.482 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:21.482 Compiler for C supports arguments -Wundef: YES
00:02:21.482 Compiler for C supports arguments -Wwrite-strings: YES
00:02:21.482 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:21.482 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:21.482 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:21.482 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:21.482 Program objdump found: YES (/usr/bin/objdump)
00:02:21.482 Compiler for C supports arguments -mavx512f: YES
00:02:21.482 Checking if "AVX512 checking" compiles: YES
00:02:21.482 Fetching value of define "__SSE4_2__" : 1
00:02:21.482 Fetching value of define "__AES__" : 1
00:02:21.482 Fetching value of define "__AVX__" : 1
00:02:21.482 Fetching value of define "__AVX2__" : 1
00:02:21.482 Fetching value of define "__AVX512BW__" : 1
00:02:21.482 Fetching value of define "__AVX512CD__" : 1
00:02:21.482 Fetching value of define "__AVX512DQ__" : 1
00:02:21.482 Fetching value of define "__AVX512F__" : 1
00:02:21.482 Fetching value of define "__AVX512VL__" : 1
00:02:21.482 Fetching value of define "__PCLMUL__" : 1
00:02:21.482 Fetching value of define "__RDRND__" : 1
00:02:21.482 Fetching value of define "__RDSEED__" : 1
00:02:21.482 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:21.482 Fetching value of define "__znver1__" : (undefined)
00:02:21.482 Fetching value of define "__znver2__" : (undefined)
00:02:21.482 Fetching value of define "__znver3__" : (undefined)
00:02:21.482 Fetching value of define "__znver4__" : (undefined)
00:02:21.482 Library asan found: YES
00:02:21.482 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:21.482 Message: lib/log: Defining dependency "log"
00:02:21.482 Message: lib/kvargs: Defining dependency "kvargs"
00:02:21.482 Message: lib/telemetry: Defining dependency "telemetry"
00:02:21.482 Library rt found: YES
00:02:21.482 Checking for function "getentropy" : NO
00:02:21.482 Message: lib/eal: Defining dependency "eal"
00:02:21.482 Message: lib/ring: Defining dependency "ring"
00:02:21.482 Message: lib/rcu: Defining dependency "rcu"
00:02:21.482 Message: lib/mempool: Defining dependency "mempool"
00:02:21.482 Message: lib/mbuf: Defining dependency "mbuf"
00:02:21.482 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:21.482 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:21.482 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:21.482 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:21.482 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:21.482 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:21.482 Compiler for C supports arguments -mpclmul: YES
00:02:21.482 Compiler for C supports arguments -maes: YES
00:02:21.482 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:21.482 Compiler for C supports arguments -mavx512bw: YES
00:02:21.483 Compiler for C supports arguments -mavx512dq: YES
00:02:21.483 Compiler for C supports arguments -mavx512vl: YES
00:02:21.483 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:21.483 Compiler for C supports arguments -mavx2: YES
00:02:21.483 Compiler for C supports arguments -mavx: YES
00:02:21.483 Message: lib/net: Defining dependency "net"
00:02:21.483 Message: lib/meter: Defining dependency "meter"
00:02:21.483 Message: lib/ethdev: Defining dependency "ethdev"
00:02:21.483 Message: lib/pci: Defining dependency "pci"
00:02:21.483 Message: lib/cmdline: Defining dependency "cmdline"
00:02:21.483 Message: lib/hash: Defining dependency "hash"
00:02:21.483 Message: lib/timer: Defining dependency "timer"
00:02:21.483 Message: lib/compressdev: Defining dependency "compressdev"
00:02:21.483 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:21.483 Message: lib/dmadev: Defining dependency "dmadev"
00:02:21.483 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:21.483 Message: lib/power: Defining dependency "power"
00:02:21.483 Message: lib/reorder: Defining dependency "reorder"
00:02:21.483 Message: lib/security: Defining dependency "security"
00:02:21.483 Has header "linux/userfaultfd.h" : YES
00:02:21.483 Has header "linux/vduse.h" : YES
00:02:21.483 Message: lib/vhost: Defining dependency "vhost"
00:02:21.483 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:21.483 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:21.483 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:21.483 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:21.483 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:21.483 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:21.483 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:21.483 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:21.483 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:21.483 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:21.483 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:21.483 Configuring doxy-api-html.conf using configuration
00:02:21.483 Configuring doxy-api-man.conf using configuration
00:02:21.483 Program mandb found: YES (/usr/bin/mandb)
00:02:21.483 Program sphinx-build found: NO
00:02:21.483 Configuring rte_build_config.h using configuration
00:02:21.483 Message:
00:02:21.483 =================
00:02:21.483 Applications Enabled
=================
00:02:21.483
00:02:21.483 apps:
00:02:21.483
00:02:21.483
00:02:21.483 Message:
00:02:21.483 =================
00:02:21.483 Libraries Enabled
00:02:21.483 =================
00:02:21.483
00:02:21.483 libs:
00:02:21.483 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:21.483 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:21.483 cryptodev, dmadev, power, reorder, security, vhost,
00:02:21.483
00:02:21.483 Message:
00:02:21.483 ===============
00:02:21.483 Drivers Enabled
00:02:21.483 ===============
00:02:21.483
00:02:21.483 common:
00:02:21.483
00:02:21.483 bus:
00:02:21.483 pci, vdev,
00:02:21.483 mempool:
00:02:21.483 ring,
00:02:21.483 dma:
00:02:21.483
00:02:21.483 net:
00:02:21.483
00:02:21.483 crypto:
00:02:21.483
00:02:21.483 compress:
00:02:21.483
00:02:21.483 vdpa:
00:02:21.483
00:02:21.483
00:02:21.483 Message:
00:02:21.483 =================
00:02:21.483 Content Skipped
00:02:21.483 =================
00:02:21.483
00:02:21.483 apps:
00:02:21.483 dumpcap: explicitly disabled via build config
00:02:21.483 graph: explicitly disabled via build config
00:02:21.483 pdump: explicitly disabled via build config
00:02:21.483 proc-info: explicitly disabled via build config
00:02:21.483 test-acl: explicitly disabled via build config
00:02:21.483 test-bbdev: explicitly disabled via build config
00:02:21.483 test-cmdline: explicitly disabled via build config
00:02:21.483 test-compress-perf: explicitly disabled via build config
00:02:21.483 test-crypto-perf: explicitly disabled via build config
00:02:21.483 test-dma-perf: explicitly disabled via build config
00:02:21.483 test-eventdev: explicitly disabled via build config
00:02:21.483 test-fib: explicitly disabled via build config
00:02:21.483 test-flow-perf: explicitly disabled via build config
00:02:21.483 test-gpudev: explicitly disabled via build config
00:02:21.483 test-mldev: explicitly disabled via build config
00:02:21.483 test-pipeline: explicitly disabled via build config
00:02:21.483 test-pmd: explicitly disabled via build config
00:02:21.483 test-regex: explicitly disabled via build config
00:02:21.483 test-sad: explicitly disabled via build config
00:02:21.483 test-security-perf: explicitly disabled via build config
00:02:21.483
00:02:21.483 libs:
00:02:21.483 argparse: explicitly disabled via build config
00:02:21.483 metrics: explicitly disabled via build config
00:02:21.483 acl: explicitly disabled via build config
00:02:21.483 bbdev: explicitly disabled via build config
00:02:21.483 bitratestats: explicitly disabled via build config
00:02:21.483 bpf: explicitly disabled via build config
00:02:21.483 cfgfile: explicitly disabled via build config
00:02:21.483 distributor: explicitly disabled via build config
00:02:21.483 efd: explicitly disabled via build config
00:02:21.483 eventdev: explicitly disabled via build config
00:02:21.483 dispatcher: explicitly disabled via build config
00:02:21.483 gpudev: explicitly disabled via build config
00:02:21.483 gro: explicitly disabled via build config
00:02:21.483 gso: explicitly disabled via build config
00:02:21.483 ip_frag: explicitly disabled via build config
00:02:21.483 jobstats: explicitly disabled via build config
00:02:21.483 latencystats: explicitly disabled via build config
00:02:21.483 lpm: explicitly disabled via build config
00:02:21.483 member: explicitly disabled via build config
00:02:21.483 pcapng: explicitly disabled via build config
00:02:21.483 rawdev: explicitly disabled via build config
00:02:21.483 regexdev: explicitly disabled via build config
00:02:21.483 mldev: explicitly disabled via build config
00:02:21.483 rib: explicitly disabled via build config
00:02:21.483 sched: explicitly disabled via build config
00:02:21.483 stack: explicitly disabled via build config
00:02:21.483 ipsec: explicitly disabled via build config
00:02:21.483 pdcp: explicitly disabled via build config
00:02:21.483 fib: explicitly disabled via build config
00:02:21.483 port: explicitly disabled via build config
00:02:21.483 pdump: explicitly disabled via build config
00:02:21.483 table: explicitly disabled via build config
00:02:21.483 pipeline: explicitly disabled via build config
00:02:21.483 graph: explicitly disabled via build config
00:02:21.483 node: explicitly disabled via build config
00:02:21.483
00:02:21.483 drivers:
00:02:21.483 common/cpt: not in enabled drivers build config
00:02:21.483 common/dpaax: not in enabled drivers build config
00:02:21.483 common/iavf: not in enabled drivers build config
00:02:21.483 common/idpf: not in enabled drivers build config
00:02:21.483 common/ionic: not in enabled drivers build config
00:02:21.483 common/mvep: not in enabled drivers build config
00:02:21.483 common/octeontx: not in enabled drivers build config
00:02:21.483 bus/auxiliary: not in enabled drivers build config
00:02:21.483 bus/cdx: not in enabled drivers build config
00:02:21.483 bus/dpaa: not in enabled drivers build config
00:02:21.483 bus/fslmc: not in enabled drivers build config
00:02:21.483 bus/ifpga: not in enabled drivers build config
00:02:21.483 bus/platform: not in enabled drivers build config
00:02:21.483 bus/uacce: not in enabled drivers build config
00:02:21.483 bus/vmbus: not in enabled drivers build config
00:02:21.483 common/cnxk: not in enabled drivers build config
00:02:21.483 common/mlx5: not in enabled drivers build config
00:02:21.483 common/nfp: not in enabled drivers build config
00:02:21.483 common/nitrox: not in enabled drivers build config
00:02:21.483 common/qat: not in enabled drivers build config
00:02:21.483 common/sfc_efx: not in enabled drivers build config
00:02:21.483 mempool/bucket: not in enabled drivers build config
00:02:21.483 mempool/cnxk: not in enabled drivers build config
00:02:21.483 mempool/dpaa: not in enabled drivers build config
00:02:21.483 mempool/dpaa2: not in enabled drivers build config
00:02:21.483 mempool/octeontx: not in enabled drivers build config
00:02:21.483 mempool/stack: not in enabled drivers build config
00:02:21.483 dma/cnxk: not in enabled drivers build config
00:02:21.483 dma/dpaa: not in enabled drivers build config
00:02:21.483 dma/dpaa2: not in enabled drivers build config
00:02:21.483 dma/hisilicon: not in enabled drivers build config
00:02:21.483 dma/idxd: not in enabled drivers build config
00:02:21.483 dma/ioat: not in enabled drivers build config
00:02:21.483 dma/skeleton: not in enabled drivers build config
00:02:21.483 net/af_packet: not in enabled drivers build config
00:02:21.483 net/af_xdp: not in enabled drivers build config
00:02:21.483 net/ark: not in enabled drivers build config
00:02:21.483 net/atlantic: not in enabled drivers build config
00:02:21.483 net/avp: not in enabled drivers build config
00:02:21.483 net/axgbe: not in enabled drivers build config
00:02:21.483 net/bnx2x: not in enabled drivers build config
00:02:21.483 net/bnxt: not in enabled drivers build config
00:02:21.483 net/bonding: not in enabled drivers build config
00:02:21.483 net/cnxk: not in enabled drivers build config
00:02:21.483 net/cpfl: not in enabled drivers
build config 00:02:21.483 net/cxgbe: not in enabled drivers build config 00:02:21.483 net/dpaa: not in enabled drivers build config 00:02:21.483 net/dpaa2: not in enabled drivers build config 00:02:21.483 net/e1000: not in enabled drivers build config 00:02:21.483 net/ena: not in enabled drivers build config 00:02:21.483 net/enetc: not in enabled drivers build config 00:02:21.483 net/enetfec: not in enabled drivers build config 00:02:21.483 net/enic: not in enabled drivers build config 00:02:21.483 net/failsafe: not in enabled drivers build config 00:02:21.483 net/fm10k: not in enabled drivers build config 00:02:21.483 net/gve: not in enabled drivers build config 00:02:21.483 net/hinic: not in enabled drivers build config 00:02:21.483 net/hns3: not in enabled drivers build config 00:02:21.483 net/i40e: not in enabled drivers build config 00:02:21.484 net/iavf: not in enabled drivers build config 00:02:21.484 net/ice: not in enabled drivers build config 00:02:21.484 net/idpf: not in enabled drivers build config 00:02:21.484 net/igc: not in enabled drivers build config 00:02:21.484 net/ionic: not in enabled drivers build config 00:02:21.484 net/ipn3ke: not in enabled drivers build config 00:02:21.484 net/ixgbe: not in enabled drivers build config 00:02:21.484 net/mana: not in enabled drivers build config 00:02:21.484 net/memif: not in enabled drivers build config 00:02:21.484 net/mlx4: not in enabled drivers build config 00:02:21.484 net/mlx5: not in enabled drivers build config 00:02:21.484 net/mvneta: not in enabled drivers build config 00:02:21.484 net/mvpp2: not in enabled drivers build config 00:02:21.484 net/netvsc: not in enabled drivers build config 00:02:21.484 net/nfb: not in enabled drivers build config 00:02:21.484 net/nfp: not in enabled drivers build config 00:02:21.484 net/ngbe: not in enabled drivers build config 00:02:21.484 net/null: not in enabled drivers build config 00:02:21.484 net/octeontx: not in enabled drivers build config 00:02:21.484 net/octeon_ep: not in enabled drivers build config 00:02:21.484 net/pcap: not in enabled drivers build config 00:02:21.484 net/pfe: not in enabled drivers build config 00:02:21.484 net/qede: not in enabled drivers build config 00:02:21.484 net/ring: not in enabled drivers build config 00:02:21.484 net/sfc: not in enabled drivers build config 00:02:21.484 net/softnic: not in enabled drivers build config 00:02:21.484 net/tap: not in enabled drivers build config 00:02:21.484 net/thunderx: not in enabled drivers build config 00:02:21.484 net/txgbe: not in enabled drivers build config 00:02:21.484 net/vdev_netvsc: not in enabled drivers build config 00:02:21.484 net/vhost: not in enabled drivers build config 00:02:21.484 net/virtio: not in enabled drivers build config 00:02:21.484 net/vmxnet3: not in enabled drivers build config 00:02:21.484 raw/*: missing internal dependency, "rawdev" 00:02:21.484 crypto/armv8: not in enabled drivers build config 00:02:21.484 crypto/bcmfs: not in enabled drivers build config 00:02:21.484 crypto/caam_jr: not in enabled drivers build config 00:02:21.484 crypto/ccp: not in enabled drivers build config 00:02:21.484 crypto/cnxk: not in enabled drivers build config 00:02:21.484 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.484 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.484 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.484 crypto/mlx5: not in enabled drivers build config 00:02:21.484 crypto/mvsam: not in enabled drivers build config 00:02:21.484 crypto/nitrox: 
not in enabled drivers build config 00:02:21.484 crypto/null: not in enabled drivers build config 00:02:21.484 crypto/octeontx: not in enabled drivers build config 00:02:21.484 crypto/openssl: not in enabled drivers build config 00:02:21.484 crypto/scheduler: not in enabled drivers build config 00:02:21.484 crypto/uadk: not in enabled drivers build config 00:02:21.484 crypto/virtio: not in enabled drivers build config 00:02:21.484 compress/isal: not in enabled drivers build config 00:02:21.484 compress/mlx5: not in enabled drivers build config 00:02:21.484 compress/nitrox: not in enabled drivers build config 00:02:21.484 compress/octeontx: not in enabled drivers build config 00:02:21.484 compress/zlib: not in enabled drivers build config 00:02:21.484 regex/*: missing internal dependency, "regexdev" 00:02:21.484 ml/*: missing internal dependency, "mldev" 00:02:21.484 vdpa/ifc: not in enabled drivers build config 00:02:21.484 vdpa/mlx5: not in enabled drivers build config 00:02:21.484 vdpa/nfp: not in enabled drivers build config 00:02:21.484 vdpa/sfc: not in enabled drivers build config 00:02:21.484 event/*: missing internal dependency, "eventdev" 00:02:21.484 baseband/*: missing internal dependency, "bbdev" 00:02:21.484 gpu/*: missing internal dependency, "gpudev" 00:02:21.484 00:02:21.484 00:02:21.484 Build targets in project: 84 00:02:21.484 00:02:21.484 DPDK 24.03.0 00:02:21.484 00:02:21.484 User defined options 00:02:21.484 buildtype : debug 00:02:21.484 default_library : shared 00:02:21.484 libdir : lib 00:02:21.484 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:21.484 b_sanitize : address 00:02:21.484 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.484 c_link_args : 00:02:21.484 cpu_instruction_set: native 00:02:21.484 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:21.484 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:21.484 enable_docs : false 00:02:21.484 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:21.484 enable_kmods : false 00:02:21.484 max_lcores : 128 00:02:21.484 tests : false 00:02:21.484 00:02:21.484 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.484 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:21.484 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.484 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.484 [3/267] Linking static target lib/librte_kvargs.a 00:02:21.484 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.484 [5/267] Linking static target lib/librte_log.a 00:02:21.484 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.484 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:21.484 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.484 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.484 [10/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.484 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.484 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:21.484 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.484 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:21.742 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:21.742 [16/267] Linking static target lib/librte_telemetry.a 00:02:21.742 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:21.742 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.001 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.001 [20/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.001 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.001 [22/267] Linking target lib/librte_log.so.24.1 00:02:22.001 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.001 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.001 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.001 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.260 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.260 [28/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:22.260 [29/267] Linking target lib/librte_kvargs.so.24.1 00:02:22.260 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.260 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.518 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.518 [33/267] Linking target lib/librte_telemetry.so.24.1 00:02:22.518 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.518 [35/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:22.518 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.518 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:22.518 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.776 [39/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:22.776 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.776 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:22.776 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.776 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:22.776 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:22.776 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.776 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.034 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.034 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 
00:02:23.034 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:23.292 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.292 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.292 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.292 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.292 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.292 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.292 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.549 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:23.549 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:23.549 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.549 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.549 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.549 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.807 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.807 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.807 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.807 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.807 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.807 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.065 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.065 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.065 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.065 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:24.065 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.065 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:24.065 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:24.065 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.065 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.325 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:24.325 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.325 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.325 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.584 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.584 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.584 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.584 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.584 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.584 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.841 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.841 [89/267] Linking static target 
lib/librte_rcu.a 00:02:24.841 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.841 [91/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.841 [92/267] Linking static target lib/librte_ring.a 00:02:24.841 [93/267] Linking static target lib/librte_eal.a 00:02:24.841 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.841 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.841 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.842 [97/267] Linking static target lib/librte_mempool.a 00:02:25.213 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:25.213 [99/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.213 [100/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.213 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:25.213 [102/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:25.213 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:25.213 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:25.497 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:25.497 [106/267] Linking static target lib/librte_meter.a 00:02:25.497 [107/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:25.755 [108/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:25.755 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:25.755 [110/267] Linking static target lib/librte_net.a 00:02:25.755 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:25.755 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.755 [113/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.755 [114/267] Linking static target lib/librte_mbuf.a 00:02:25.755 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:26.014 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.014 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:26.014 [118/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.014 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:26.272 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:26.272 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:26.272 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:26.531 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:26.531 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:26.531 [125/267] Linking static target lib/librte_pci.a 00:02:26.531 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:26.531 [127/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.788 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:26.788 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:26.788 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:26.788 [131/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:26.788 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:26.788 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:26.788 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:26.788 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:26.788 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:26.788 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:26.788 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:26.788 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.788 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:26.788 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:26.788 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:27.044 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:27.044 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:27.044 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:27.044 [146/267] Linking static target lib/librte_cmdline.a 00:02:27.301 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:27.301 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:27.301 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:27.301 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:27.557 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:27.557 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:27.557 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:27.557 [154/267] Linking static target lib/librte_timer.a 00:02:27.557 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:27.814 [156/267] Linking static target lib/librte_hash.a 00:02:27.814 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:27.814 [158/267] Linking static target lib/librte_compressdev.a 00:02:27.814 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:27.814 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:27.814 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:27.814 [162/267] Linking static target lib/librte_ethdev.a 00:02:27.814 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:28.071 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:28.071 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:28.071 [166/267] Linking static target lib/librte_dmadev.a 00:02:28.071 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.071 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:28.328 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:28.328 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:28.328 [171/267] Generating lib/cmdline.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:28.328 [172/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.586 [173/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:28.586 [174/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.586 [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:28.586 [176/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:28.586 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:28.586 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:28.586 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.844 [180/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:28.844 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:28.844 [182/267] Linking static target lib/librte_cryptodev.a 00:02:28.844 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:28.844 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.844 [185/267] Linking static target lib/librte_power.a 00:02:29.101 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.101 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:29.101 [188/267] Linking static target lib/librte_reorder.a 00:02:29.101 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:29.359 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:29.359 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.359 [192/267] Linking static target lib/librte_security.a 00:02:29.615 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.615 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:29.872 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:29.872 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:29.872 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.872 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:30.128 [199/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.128 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:30.128 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:30.128 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:30.128 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:30.456 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:30.456 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:30.456 [206/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:30.456 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:30.456 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:30.456 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:30.713 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a 
custom command 00:02:30.713 [211/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.713 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:30.713 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.713 [214/267] Linking static target drivers/librte_bus_vdev.a 00:02:30.713 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.713 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.713 [217/267] Linking static target drivers/librte_bus_pci.a 00:02:30.713 [218/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.713 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.713 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.970 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:30.970 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.970 [223/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.970 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.970 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:31.227 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.485 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.414 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.671 [229/267] Linking target lib/librte_eal.so.24.1 00:02:32.671 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:32.671 [231/267] Linking target lib/librte_meter.so.24.1 00:02:32.929 [232/267] Linking target lib/librte_pci.so.24.1 00:02:32.929 [233/267] Linking target lib/librte_ring.so.24.1 00:02:32.929 [234/267] Linking target lib/librte_timer.so.24.1 00:02:32.929 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:32.929 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.929 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.929 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.929 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.929 [240/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.929 [241/267] Linking target lib/librte_rcu.so.24.1 00:02:32.929 [242/267] Linking target lib/librte_mempool.so.24.1 00:02:32.929 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.929 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.929 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:33.185 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:33.185 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:33.185 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:33.185 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:33.185 [250/267] Linking target 
lib/librte_reorder.so.24.1 00:02:33.185 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:33.185 [252/267] Linking target lib/librte_net.so.24.1 00:02:33.185 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:33.443 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:33.443 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:33.443 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:33.443 [257/267] Linking target lib/librte_hash.so.24.1 00:02:33.443 [258/267] Linking target lib/librte_security.so.24.1 00:02:33.444 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:33.773 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.773 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:33.773 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:34.055 [263/267] Linking target lib/librte_power.so.24.1 00:02:34.620 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.620 [265/267] Linking static target lib/librte_vhost.a 00:02:35.992 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.992 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:35.992 INFO: autodetecting backend as ninja 00:02:35.992 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:50.886 CC lib/ut/ut.o 00:02:50.886 CC lib/log/log_flags.o 00:02:50.886 CC lib/log/log.o 00:02:50.886 CC lib/log/log_deprecated.o 00:02:50.886 CC lib/ut_mock/mock.o 00:02:50.886 LIB libspdk_ut_mock.a 00:02:50.886 LIB libspdk_ut.a 00:02:50.886 LIB libspdk_log.a 00:02:50.886 SO libspdk_ut_mock.so.6.0 00:02:50.886 SO libspdk_ut.so.2.0 00:02:50.886 SO libspdk_log.so.7.1 00:02:50.886 SYMLINK libspdk_ut_mock.so 00:02:50.886 SYMLINK libspdk_ut.so 00:02:50.886 SYMLINK libspdk_log.so 00:02:50.886 CC lib/ioat/ioat.o 00:02:50.886 CC lib/dma/dma.o 00:02:50.886 CC lib/util/base64.o 00:02:50.886 CC lib/util/bit_array.o 00:02:50.886 CC lib/util/cpuset.o 00:02:50.886 CC lib/util/crc16.o 00:02:50.886 CC lib/util/crc32c.o 00:02:50.886 CC lib/util/crc32.o 00:02:50.886 CXX lib/trace_parser/trace.o 00:02:50.886 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.886 CC lib/util/crc32_ieee.o 00:02:50.886 CC lib/util/crc64.o 00:02:50.886 CC lib/util/dif.o 00:02:50.886 CC lib/util/fd.o 00:02:50.886 LIB libspdk_dma.a 00:02:50.886 CC lib/util/fd_group.o 00:02:50.886 SO libspdk_dma.so.5.0 00:02:50.886 CC lib/util/file.o 00:02:50.886 CC lib/util/hexlify.o 00:02:50.886 CC lib/util/iov.o 00:02:50.886 SYMLINK libspdk_dma.so 00:02:50.886 CC lib/vfio_user/host/vfio_user.o 00:02:50.886 LIB libspdk_ioat.a 00:02:50.886 CC lib/util/math.o 00:02:50.886 CC lib/util/net.o 00:02:50.886 SO libspdk_ioat.so.7.0 00:02:50.886 CC lib/util/pipe.o 00:02:50.886 CC lib/util/strerror_tls.o 00:02:50.886 SYMLINK libspdk_ioat.so 00:02:50.886 CC lib/util/string.o 00:02:50.886 CC lib/util/uuid.o 00:02:51.144 CC lib/util/xor.o 00:02:51.144 CC lib/util/zipf.o 00:02:51.144 CC lib/util/md5.o 00:02:51.144 LIB libspdk_vfio_user.a 00:02:51.144 SO libspdk_vfio_user.so.5.0 00:02:51.144 SYMLINK libspdk_vfio_user.so 00:02:51.144 LIB libspdk_util.a 00:02:51.402 SO libspdk_util.so.10.1 00:02:51.402 LIB libspdk_trace_parser.a 00:02:51.660 SYMLINK libspdk_util.so 00:02:51.660 SO libspdk_trace_parser.so.6.0 00:02:51.660 
SYMLINK libspdk_trace_parser.so 00:02:51.660 CC lib/vmd/vmd.o 00:02:51.660 CC lib/vmd/led.o 00:02:51.660 CC lib/conf/conf.o 00:02:51.660 CC lib/idxd/idxd.o 00:02:51.660 CC lib/idxd/idxd_user.o 00:02:51.660 CC lib/idxd/idxd_kernel.o 00:02:51.660 CC lib/json/json_parse.o 00:02:51.660 CC lib/json/json_util.o 00:02:51.660 CC lib/env_dpdk/env.o 00:02:51.660 CC lib/rdma_utils/rdma_utils.o 00:02:51.918 CC lib/env_dpdk/memory.o 00:02:51.918 CC lib/env_dpdk/pci.o 00:02:51.918 LIB libspdk_conf.a 00:02:51.918 LIB libspdk_rdma_utils.a 00:02:51.918 SO libspdk_conf.so.6.0 00:02:51.918 CC lib/env_dpdk/init.o 00:02:51.918 SO libspdk_rdma_utils.so.1.0 00:02:51.918 CC lib/json/json_write.o 00:02:51.918 CC lib/env_dpdk/threads.o 00:02:51.918 SYMLINK libspdk_conf.so 00:02:51.918 CC lib/env_dpdk/pci_ioat.o 00:02:51.918 SYMLINK libspdk_rdma_utils.so 00:02:51.918 CC lib/env_dpdk/pci_virtio.o 00:02:52.177 CC lib/env_dpdk/pci_vmd.o 00:02:52.177 CC lib/env_dpdk/pci_idxd.o 00:02:52.177 CC lib/env_dpdk/pci_event.o 00:02:52.177 LIB libspdk_json.a 00:02:52.177 SO libspdk_json.so.6.0 00:02:52.177 CC lib/env_dpdk/sigbus_handler.o 00:02:52.177 LIB libspdk_idxd.a 00:02:52.177 SYMLINK libspdk_json.so 00:02:52.177 CC lib/env_dpdk/pci_dpdk.o 00:02:52.177 SO libspdk_idxd.so.12.1 00:02:52.177 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.177 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.177 SYMLINK libspdk_idxd.so 00:02:52.436 CC lib/rdma_provider/common.o 00:02:52.436 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.436 LIB libspdk_vmd.a 00:02:52.436 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.436 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.436 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.436 SO libspdk_vmd.so.6.0 00:02:52.436 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.436 SYMLINK libspdk_vmd.so 00:02:52.436 LIB libspdk_rdma_provider.a 00:02:52.436 SO libspdk_rdma_provider.so.7.0 00:02:52.694 SYMLINK libspdk_rdma_provider.so 00:02:52.694 LIB libspdk_jsonrpc.a 00:02:52.694 SO libspdk_jsonrpc.so.6.0 00:02:52.694 SYMLINK libspdk_jsonrpc.so 00:02:52.952 CC lib/rpc/rpc.o 00:02:53.209 LIB libspdk_env_dpdk.a 00:02:53.209 LIB libspdk_rpc.a 00:02:53.209 SO libspdk_rpc.so.6.0 00:02:53.209 SO libspdk_env_dpdk.so.15.1 00:02:53.209 SYMLINK libspdk_rpc.so 00:02:53.467 SYMLINK libspdk_env_dpdk.so 00:02:53.467 CC lib/keyring/keyring.o 00:02:53.467 CC lib/keyring/keyring_rpc.o 00:02:53.467 CC lib/notify/notify_rpc.o 00:02:53.467 CC lib/notify/notify.o 00:02:53.467 CC lib/trace/trace.o 00:02:53.467 CC lib/trace/trace_rpc.o 00:02:53.467 CC lib/trace/trace_flags.o 00:02:53.725 LIB libspdk_notify.a 00:02:53.725 SO libspdk_notify.so.6.0 00:02:53.725 LIB libspdk_keyring.a 00:02:53.725 SYMLINK libspdk_notify.so 00:02:53.725 SO libspdk_keyring.so.2.0 00:02:53.725 LIB libspdk_trace.a 00:02:53.725 SO libspdk_trace.so.11.0 00:02:53.725 SYMLINK libspdk_keyring.so 00:02:53.725 SYMLINK libspdk_trace.so 00:02:53.984 CC lib/thread/thread.o 00:02:53.984 CC lib/thread/iobuf.o 00:02:53.984 CC lib/sock/sock_rpc.o 00:02:53.984 CC lib/sock/sock.o 00:02:54.550 LIB libspdk_sock.a 00:02:54.550 SO libspdk_sock.so.10.0 00:02:54.550 SYMLINK libspdk_sock.so 00:02:54.808 CC lib/nvme/nvme_fabric.o 00:02:54.808 CC lib/nvme/nvme_ns.o 00:02:54.808 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.808 CC lib/nvme/nvme_pcie_common.o 00:02:54.808 CC lib/nvme/nvme_ctrlr.o 00:02:54.808 CC lib/nvme/nvme_ns_cmd.o 00:02:54.808 CC lib/nvme/nvme_qpair.o 00:02:54.808 CC lib/nvme/nvme_pcie.o 00:02:54.808 CC lib/nvme/nvme.o 00:02:55.375 CC lib/nvme/nvme_quirks.o 00:02:55.632 CC lib/nvme/nvme_transport.o 
00:02:55.632 CC lib/nvme/nvme_discovery.o 00:02:55.632 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.632 LIB libspdk_thread.a 00:02:55.632 SO libspdk_thread.so.11.0 00:02:55.632 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.632 SYMLINK libspdk_thread.so 00:02:55.632 CC lib/nvme/nvme_tcp.o 00:02:55.890 CC lib/blob/blobstore.o 00:02:55.890 CC lib/accel/accel.o 00:02:55.890 CC lib/init/json_config.o 00:02:56.149 CC lib/nvme/nvme_opal.o 00:02:56.149 CC lib/virtio/virtio.o 00:02:56.149 CC lib/virtio/virtio_vhost_user.o 00:02:56.149 CC lib/fsdev/fsdev.o 00:02:56.149 CC lib/blob/request.o 00:02:56.149 CC lib/init/subsystem.o 00:02:56.406 CC lib/blob/zeroes.o 00:02:56.406 CC lib/init/subsystem_rpc.o 00:02:56.406 CC lib/virtio/virtio_vfio_user.o 00:02:56.664 CC lib/init/rpc.o 00:02:56.664 CC lib/blob/blob_bs_dev.o 00:02:56.664 CC lib/fsdev/fsdev_io.o 00:02:56.664 CC lib/nvme/nvme_io_msg.o 00:02:56.664 CC lib/virtio/virtio_pci.o 00:02:56.664 LIB libspdk_init.a 00:02:56.664 SO libspdk_init.so.6.0 00:02:56.664 CC lib/fsdev/fsdev_rpc.o 00:02:56.664 CC lib/accel/accel_rpc.o 00:02:57.088 SYMLINK libspdk_init.so 00:02:57.088 CC lib/accel/accel_sw.o 00:02:57.088 LIB libspdk_fsdev.a 00:02:57.088 CC lib/event/app.o 00:02:57.088 CC lib/event/reactor.o 00:02:57.088 LIB libspdk_virtio.a 00:02:57.088 CC lib/event/log_rpc.o 00:02:57.088 SO libspdk_fsdev.so.2.0 00:02:57.088 SO libspdk_virtio.so.7.0 00:02:57.346 SYMLINK libspdk_fsdev.so 00:02:57.346 CC lib/nvme/nvme_poll_group.o 00:02:57.346 SYMLINK libspdk_virtio.so 00:02:57.346 CC lib/nvme/nvme_zns.o 00:02:57.346 CC lib/event/app_rpc.o 00:02:57.346 CC lib/event/scheduler_static.o 00:02:57.346 CC lib/nvme/nvme_stubs.o 00:02:57.346 LIB libspdk_accel.a 00:02:57.346 SO libspdk_accel.so.16.0 00:02:57.346 CC lib/nvme/nvme_auth.o 00:02:57.346 SYMLINK libspdk_accel.so 00:02:57.346 CC lib/nvme/nvme_cuse.o 00:02:57.346 CC lib/nvme/nvme_rdma.o 00:02:57.346 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:57.346 LIB libspdk_event.a 00:02:57.346 CC lib/bdev/bdev.o 00:02:57.346 SO libspdk_event.so.14.0 00:02:57.603 SYMLINK libspdk_event.so 00:02:57.603 CC lib/bdev/bdev_rpc.o 00:02:57.603 CC lib/bdev/bdev_zone.o 00:02:57.603 CC lib/bdev/part.o 00:02:57.603 CC lib/bdev/scsi_nvme.o 00:02:58.165 LIB libspdk_fuse_dispatcher.a 00:02:58.165 SO libspdk_fuse_dispatcher.so.1.0 00:02:58.165 SYMLINK libspdk_fuse_dispatcher.so 00:02:58.731 LIB libspdk_nvme.a 00:02:58.731 SO libspdk_nvme.so.15.0 00:02:58.989 LIB libspdk_blob.a 00:02:58.989 SYMLINK libspdk_nvme.so 00:02:58.989 SO libspdk_blob.so.12.0 00:02:59.248 SYMLINK libspdk_blob.so 00:02:59.510 CC lib/lvol/lvol.o 00:02:59.510 CC lib/blobfs/blobfs.o 00:02:59.510 CC lib/blobfs/tree.o 00:03:00.443 LIB libspdk_bdev.a 00:03:00.443 SO libspdk_bdev.so.17.0 00:03:00.443 LIB libspdk_blobfs.a 00:03:00.443 LIB libspdk_lvol.a 00:03:00.443 SO libspdk_blobfs.so.11.0 00:03:00.443 SO libspdk_lvol.so.11.0 00:03:00.443 SYMLINK libspdk_bdev.so 00:03:00.443 SYMLINK libspdk_lvol.so 00:03:00.443 SYMLINK libspdk_blobfs.so 00:03:00.443 CC lib/nvmf/ctrlr.o 00:03:00.443 CC lib/nvmf/ctrlr_discovery.o 00:03:00.443 CC lib/nvmf/subsystem.o 00:03:00.443 CC lib/nvmf/nvmf.o 00:03:00.443 CC lib/nvmf/ctrlr_bdev.o 00:03:00.443 CC lib/nvmf/nvmf_rpc.o 00:03:00.443 CC lib/ftl/ftl_core.o 00:03:00.443 CC lib/nbd/nbd.o 00:03:00.443 CC lib/ublk/ublk.o 00:03:00.443 CC lib/scsi/dev.o 00:03:00.701 CC lib/scsi/lun.o 00:03:00.958 CC lib/ftl/ftl_init.o 00:03:00.958 CC lib/nbd/nbd_rpc.o 00:03:00.958 CC lib/nvmf/transport.o 00:03:00.958 CC lib/scsi/port.o 00:03:01.214 LIB libspdk_nbd.a 
00:03:01.214 CC lib/ftl/ftl_layout.o 00:03:01.214 SO libspdk_nbd.so.7.0 00:03:01.214 CC lib/scsi/scsi.o 00:03:01.214 SYMLINK libspdk_nbd.so 00:03:01.214 CC lib/nvmf/tcp.o 00:03:01.214 CC lib/nvmf/stubs.o 00:03:01.214 CC lib/ublk/ublk_rpc.o 00:03:01.214 CC lib/scsi/scsi_bdev.o 00:03:01.471 LIB libspdk_ublk.a 00:03:01.471 SO libspdk_ublk.so.3.0 00:03:01.471 CC lib/ftl/ftl_debug.o 00:03:01.471 CC lib/nvmf/mdns_server.o 00:03:01.471 SYMLINK libspdk_ublk.so 00:03:01.471 CC lib/ftl/ftl_io.o 00:03:01.471 CC lib/ftl/ftl_sb.o 00:03:01.471 CC lib/nvmf/rdma.o 00:03:01.728 CC lib/nvmf/auth.o 00:03:01.728 CC lib/ftl/ftl_l2p.o 00:03:01.728 CC lib/ftl/ftl_l2p_flat.o 00:03:01.728 CC lib/ftl/ftl_nv_cache.o 00:03:01.728 CC lib/ftl/ftl_band.o 00:03:01.728 CC lib/scsi/scsi_pr.o 00:03:01.728 CC lib/scsi/scsi_rpc.o 00:03:01.728 CC lib/scsi/task.o 00:03:01.728 CC lib/ftl/ftl_band_ops.o 00:03:01.985 CC lib/ftl/ftl_writer.o 00:03:01.985 CC lib/ftl/ftl_rq.o 00:03:01.985 LIB libspdk_scsi.a 00:03:01.985 CC lib/ftl/ftl_reloc.o 00:03:01.985 SO libspdk_scsi.so.9.0 00:03:01.985 CC lib/ftl/ftl_l2p_cache.o 00:03:01.985 CC lib/ftl/ftl_p2l.o 00:03:02.243 SYMLINK libspdk_scsi.so 00:03:02.243 CC lib/ftl/ftl_p2l_log.o 00:03:02.243 CC lib/ftl/mngt/ftl_mngt.o 00:03:02.243 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:02.243 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:02.531 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:02.531 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:02.531 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:02.531 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:02.531 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:02.531 CC lib/vhost/vhost.o 00:03:02.531 CC lib/iscsi/conn.o 00:03:02.531 CC lib/iscsi/init_grp.o 00:03:02.531 CC lib/iscsi/iscsi.o 00:03:02.788 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:02.788 CC lib/vhost/vhost_rpc.o 00:03:02.788 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:02.788 CC lib/iscsi/param.o 00:03:02.788 CC lib/iscsi/portal_grp.o 00:03:02.788 CC lib/iscsi/tgt_node.o 00:03:03.045 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:03.045 CC lib/iscsi/iscsi_subsystem.o 00:03:03.045 CC lib/iscsi/iscsi_rpc.o 00:03:03.045 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:03.045 CC lib/iscsi/task.o 00:03:03.302 CC lib/vhost/vhost_scsi.o 00:03:03.302 CC lib/vhost/vhost_blk.o 00:03:03.302 CC lib/vhost/rte_vhost_user.o 00:03:03.302 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:03.302 CC lib/ftl/utils/ftl_conf.o 00:03:03.302 CC lib/ftl/utils/ftl_md.o 00:03:03.302 CC lib/ftl/utils/ftl_mempool.o 00:03:03.302 CC lib/ftl/utils/ftl_bitmap.o 00:03:03.559 CC lib/ftl/utils/ftl_property.o 00:03:03.559 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:03.559 LIB libspdk_nvmf.a 00:03:03.559 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:03.559 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:03.559 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:03.559 SO libspdk_nvmf.so.20.0 00:03:03.816 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:03.816 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:03.816 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:03.816 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:03.816 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:03.816 SYMLINK libspdk_nvmf.so 00:03:03.816 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:03.816 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:03.816 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:04.072 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:04.072 CC lib/ftl/base/ftl_base_dev.o 00:03:04.072 CC lib/ftl/base/ftl_base_bdev.o 00:03:04.072 CC lib/ftl/ftl_trace.o 00:03:04.072 LIB libspdk_iscsi.a 00:03:04.072 SO libspdk_iscsi.so.8.0 00:03:04.330 LIB libspdk_vhost.a 00:03:04.330 SYMLINK libspdk_iscsi.so 00:03:04.330 
LIB libspdk_ftl.a 00:03:04.330 SO libspdk_vhost.so.8.0 00:03:04.330 SYMLINK libspdk_vhost.so 00:03:04.587 SO libspdk_ftl.so.9.0 00:03:04.587 SYMLINK libspdk_ftl.so 00:03:04.845 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.101 CC module/accel/dsa/accel_dsa.o 00:03:05.101 CC module/keyring/file/keyring.o 00:03:05.101 CC module/accel/error/accel_error.o 00:03:05.101 CC module/keyring/linux/keyring.o 00:03:05.101 CC module/blob/bdev/blob_bdev.o 00:03:05.101 CC module/sock/posix/posix.o 00:03:05.101 CC module/accel/ioat/accel_ioat.o 00:03:05.101 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.101 CC module/fsdev/aio/fsdev_aio.o 00:03:05.101 LIB libspdk_env_dpdk_rpc.a 00:03:05.101 CC module/keyring/linux/keyring_rpc.o 00:03:05.101 CC module/keyring/file/keyring_rpc.o 00:03:05.101 SO libspdk_env_dpdk_rpc.so.6.0 00:03:05.101 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.101 CC module/accel/error/accel_error_rpc.o 00:03:05.101 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.358 LIB libspdk_keyring_linux.a 00:03:05.358 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:05.358 LIB libspdk_scheduler_dynamic.a 00:03:05.358 LIB libspdk_blob_bdev.a 00:03:05.358 SO libspdk_keyring_linux.so.1.0 00:03:05.358 LIB libspdk_keyring_file.a 00:03:05.358 SO libspdk_scheduler_dynamic.so.4.0 00:03:05.358 SO libspdk_blob_bdev.so.12.0 00:03:05.358 SO libspdk_keyring_file.so.2.0 00:03:05.358 LIB libspdk_accel_ioat.a 00:03:05.358 SYMLINK libspdk_keyring_linux.so 00:03:05.358 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.358 LIB libspdk_accel_error.a 00:03:05.358 SO libspdk_accel_ioat.so.6.0 00:03:05.358 SYMLINK libspdk_blob_bdev.so 00:03:05.358 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.358 SYMLINK libspdk_keyring_file.so 00:03:05.358 SO libspdk_accel_error.so.2.0 00:03:05.358 SYMLINK libspdk_accel_ioat.so 00:03:05.358 CC module/fsdev/aio/linux_aio_mgr.o 00:03:05.358 SYMLINK libspdk_accel_error.so 00:03:05.358 CC module/accel/iaa/accel_iaa.o 00:03:05.358 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.358 LIB libspdk_accel_dsa.a 00:03:05.615 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.615 SO libspdk_accel_dsa.so.5.0 00:03:05.615 CC module/bdev/error/vbdev_error.o 00:03:05.615 SYMLINK libspdk_accel_dsa.so 00:03:05.615 CC module/bdev/delay/vbdev_delay.o 00:03:05.615 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.615 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.615 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.615 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.615 LIB libspdk_scheduler_gscheduler.a 00:03:05.615 CC module/bdev/gpt/gpt.o 00:03:05.615 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.615 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.615 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.615 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.615 LIB libspdk_accel_iaa.a 00:03:05.872 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.872 SO libspdk_accel_iaa.so.3.0 00:03:05.872 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.872 LIB libspdk_fsdev_aio.a 00:03:05.872 LIB libspdk_sock_posix.a 00:03:05.872 SO libspdk_sock_posix.so.6.0 00:03:05.872 SO libspdk_fsdev_aio.so.1.0 00:03:05.872 SYMLINK libspdk_accel_iaa.so 00:03:05.872 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.872 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.872 SYMLINK libspdk_fsdev_aio.so 00:03:05.872 SYMLINK libspdk_sock_posix.so 00:03:05.872 CC module/bdev/malloc/bdev_malloc.o 00:03:05.872 CC module/bdev/null/bdev_null.o 00:03:05.872 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:05.872 LIB libspdk_bdev_error.a 00:03:05.872 LIB 
libspdk_blobfs_bdev.a 00:03:05.872 LIB libspdk_bdev_gpt.a 00:03:05.872 SO libspdk_blobfs_bdev.so.6.0 00:03:05.872 SO libspdk_bdev_error.so.6.0 00:03:06.128 CC module/bdev/nvme/bdev_nvme.o 00:03:06.128 SO libspdk_bdev_gpt.so.6.0 00:03:06.128 CC module/bdev/passthru/vbdev_passthru.o 00:03:06.128 SYMLINK libspdk_blobfs_bdev.so 00:03:06.128 SYMLINK libspdk_bdev_gpt.so 00:03:06.128 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:06.128 SYMLINK libspdk_bdev_error.so 00:03:06.128 CC module/bdev/null/bdev_null_rpc.o 00:03:06.128 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:06.128 LIB libspdk_bdev_delay.a 00:03:06.128 SO libspdk_bdev_delay.so.6.0 00:03:06.128 LIB libspdk_bdev_lvol.a 00:03:06.128 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.128 SO libspdk_bdev_lvol.so.6.0 00:03:06.128 SYMLINK libspdk_bdev_delay.so 00:03:06.128 LIB libspdk_bdev_null.a 00:03:06.128 CC module/bdev/raid/bdev_raid.o 00:03:06.128 LIB libspdk_bdev_malloc.a 00:03:06.386 SO libspdk_bdev_null.so.6.0 00:03:06.386 CC module/bdev/split/vbdev_split.o 00:03:06.386 SO libspdk_bdev_malloc.so.6.0 00:03:06.386 SYMLINK libspdk_bdev_lvol.so 00:03:06.386 SYMLINK libspdk_bdev_null.so 00:03:06.386 CC module/bdev/raid/bdev_raid_rpc.o 00:03:06.386 LIB libspdk_bdev_passthru.a 00:03:06.386 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:06.386 SYMLINK libspdk_bdev_malloc.so 00:03:06.386 CC module/bdev/raid/bdev_raid_sb.o 00:03:06.386 SO libspdk_bdev_passthru.so.6.0 00:03:06.386 CC module/bdev/xnvme/bdev_xnvme.o 00:03:06.386 SYMLINK libspdk_bdev_passthru.so 00:03:06.386 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:06.386 CC module/bdev/aio/bdev_aio.o 00:03:06.386 CC module/bdev/split/vbdev_split_rpc.o 00:03:06.386 CC module/bdev/aio/bdev_aio_rpc.o 00:03:06.644 CC module/bdev/raid/raid0.o 00:03:06.644 CC module/bdev/raid/raid1.o 00:03:06.644 CC module/bdev/raid/concat.o 00:03:06.644 LIB libspdk_bdev_split.a 00:03:06.644 LIB libspdk_bdev_zone_block.a 00:03:06.644 SO libspdk_bdev_split.so.6.0 00:03:06.644 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:06.644 SO libspdk_bdev_zone_block.so.6.0 00:03:06.644 SYMLINK libspdk_bdev_split.so 00:03:06.644 SYMLINK libspdk_bdev_zone_block.so 00:03:06.644 CC module/bdev/nvme/nvme_rpc.o 00:03:06.644 CC module/bdev/nvme/bdev_mdns_client.o 00:03:06.902 LIB libspdk_bdev_aio.a 00:03:06.902 LIB libspdk_bdev_xnvme.a 00:03:06.902 SO libspdk_bdev_xnvme.so.3.0 00:03:06.902 SO libspdk_bdev_aio.so.6.0 00:03:06.902 CC module/bdev/nvme/vbdev_opal.o 00:03:06.902 SYMLINK libspdk_bdev_aio.so 00:03:06.902 SYMLINK libspdk_bdev_xnvme.so 00:03:06.902 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:06.902 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:06.902 CC module/bdev/ftl/bdev_ftl.o 00:03:06.902 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:06.902 CC module/bdev/iscsi/bdev_iscsi.o 00:03:06.902 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:06.902 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:07.160 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:07.160 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:07.160 LIB libspdk_bdev_raid.a 00:03:07.160 SO libspdk_bdev_raid.so.6.0 00:03:07.160 LIB libspdk_bdev_ftl.a 00:03:07.160 SYMLINK libspdk_bdev_raid.so 00:03:07.160 LIB libspdk_bdev_iscsi.a 00:03:07.160 SO libspdk_bdev_ftl.so.6.0 00:03:07.160 SO libspdk_bdev_iscsi.so.6.0 00:03:07.160 SYMLINK libspdk_bdev_ftl.so 00:03:07.418 SYMLINK libspdk_bdev_iscsi.so 00:03:07.418 LIB libspdk_bdev_virtio.a 00:03:07.418 SO libspdk_bdev_virtio.so.6.0 00:03:07.679 SYMLINK libspdk_bdev_virtio.so 00:03:08.636 LIB libspdk_bdev_nvme.a 00:03:08.636 SO 
libspdk_bdev_nvme.so.7.1 00:03:08.893 SYMLINK libspdk_bdev_nvme.so 00:03:09.155 CC module/event/subsystems/fsdev/fsdev.o 00:03:09.155 CC module/event/subsystems/keyring/keyring.o 00:03:09.155 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.155 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.155 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.155 CC module/event/subsystems/vmd/vmd.o 00:03:09.155 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.155 CC module/event/subsystems/sock/sock.o 00:03:09.155 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.421 LIB libspdk_event_vmd.a 00:03:09.421 LIB libspdk_event_vhost_blk.a 00:03:09.421 LIB libspdk_event_keyring.a 00:03:09.421 LIB libspdk_event_fsdev.a 00:03:09.421 SO libspdk_event_vmd.so.6.0 00:03:09.421 LIB libspdk_event_iobuf.a 00:03:09.421 SO libspdk_event_keyring.so.1.0 00:03:09.421 SO libspdk_event_vhost_blk.so.3.0 00:03:09.421 SO libspdk_event_fsdev.so.1.0 00:03:09.421 SO libspdk_event_iobuf.so.3.0 00:03:09.421 SYMLINK libspdk_event_keyring.so 00:03:09.421 SYMLINK libspdk_event_vhost_blk.so 00:03:09.421 SYMLINK libspdk_event_vmd.so 00:03:09.421 SYMLINK libspdk_event_fsdev.so 00:03:09.421 LIB libspdk_event_scheduler.a 00:03:09.421 LIB libspdk_event_sock.a 00:03:09.421 SYMLINK libspdk_event_iobuf.so 00:03:09.421 SO libspdk_event_scheduler.so.4.0 00:03:09.421 SO libspdk_event_sock.so.5.0 00:03:09.421 SYMLINK libspdk_event_scheduler.so 00:03:09.421 SYMLINK libspdk_event_sock.so 00:03:09.679 CC module/event/subsystems/accel/accel.o 00:03:09.679 LIB libspdk_event_accel.a 00:03:09.679 SO libspdk_event_accel.so.6.0 00:03:09.679 SYMLINK libspdk_event_accel.so 00:03:09.936 CC module/event/subsystems/bdev/bdev.o 00:03:10.194 LIB libspdk_event_bdev.a 00:03:10.194 SO libspdk_event_bdev.so.6.0 00:03:10.194 SYMLINK libspdk_event_bdev.so 00:03:10.452 CC module/event/subsystems/nbd/nbd.o 00:03:10.452 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:10.452 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:10.452 CC module/event/subsystems/scsi/scsi.o 00:03:10.452 CC module/event/subsystems/ublk/ublk.o 00:03:10.452 LIB libspdk_event_nbd.a 00:03:10.452 LIB libspdk_event_scsi.a 00:03:10.452 SO libspdk_event_nbd.so.6.0 00:03:10.452 LIB libspdk_event_ublk.a 00:03:10.452 SO libspdk_event_ublk.so.3.0 00:03:10.452 SO libspdk_event_scsi.so.6.0 00:03:10.711 SYMLINK libspdk_event_nbd.so 00:03:10.711 SYMLINK libspdk_event_scsi.so 00:03:10.711 SYMLINK libspdk_event_ublk.so 00:03:10.711 LIB libspdk_event_nvmf.a 00:03:10.711 SO libspdk_event_nvmf.so.6.0 00:03:10.711 SYMLINK libspdk_event_nvmf.so 00:03:10.711 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:10.711 CC module/event/subsystems/iscsi/iscsi.o 00:03:10.970 LIB libspdk_event_vhost_scsi.a 00:03:10.970 SO libspdk_event_vhost_scsi.so.3.0 00:03:10.970 LIB libspdk_event_iscsi.a 00:03:10.970 SYMLINK libspdk_event_vhost_scsi.so 00:03:10.970 SO libspdk_event_iscsi.so.6.0 00:03:10.970 SYMLINK libspdk_event_iscsi.so 00:03:11.228 SO libspdk.so.6.0 00:03:11.228 SYMLINK libspdk.so 00:03:11.228 CC test/rpc_client/rpc_client_test.o 00:03:11.228 TEST_HEADER include/spdk/accel.h 00:03:11.228 TEST_HEADER include/spdk/accel_module.h 00:03:11.228 TEST_HEADER include/spdk/assert.h 00:03:11.228 TEST_HEADER include/spdk/barrier.h 00:03:11.228 TEST_HEADER include/spdk/base64.h 00:03:11.228 TEST_HEADER include/spdk/bdev.h 00:03:11.520 CXX app/trace/trace.o 00:03:11.521 TEST_HEADER include/spdk/bdev_module.h 00:03:11.521 TEST_HEADER include/spdk/bdev_zone.h 00:03:11.521 TEST_HEADER include/spdk/bit_array.h 
00:03:11.521 TEST_HEADER include/spdk/bit_pool.h 00:03:11.521 TEST_HEADER include/spdk/blob_bdev.h 00:03:11.521 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:11.521 TEST_HEADER include/spdk/blobfs.h 00:03:11.521 TEST_HEADER include/spdk/blob.h 00:03:11.521 TEST_HEADER include/spdk/conf.h 00:03:11.521 TEST_HEADER include/spdk/config.h 00:03:11.521 TEST_HEADER include/spdk/cpuset.h 00:03:11.521 TEST_HEADER include/spdk/crc16.h 00:03:11.521 TEST_HEADER include/spdk/crc32.h 00:03:11.521 TEST_HEADER include/spdk/crc64.h 00:03:11.521 TEST_HEADER include/spdk/dif.h 00:03:11.521 TEST_HEADER include/spdk/dma.h 00:03:11.521 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:11.521 TEST_HEADER include/spdk/endian.h 00:03:11.521 TEST_HEADER include/spdk/env_dpdk.h 00:03:11.521 TEST_HEADER include/spdk/env.h 00:03:11.521 TEST_HEADER include/spdk/event.h 00:03:11.521 TEST_HEADER include/spdk/fd_group.h 00:03:11.521 TEST_HEADER include/spdk/fd.h 00:03:11.521 TEST_HEADER include/spdk/file.h 00:03:11.521 TEST_HEADER include/spdk/fsdev.h 00:03:11.521 TEST_HEADER include/spdk/fsdev_module.h 00:03:11.521 TEST_HEADER include/spdk/ftl.h 00:03:11.521 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:11.521 TEST_HEADER include/spdk/gpt_spec.h 00:03:11.521 TEST_HEADER include/spdk/hexlify.h 00:03:11.521 TEST_HEADER include/spdk/histogram_data.h 00:03:11.521 TEST_HEADER include/spdk/idxd.h 00:03:11.521 TEST_HEADER include/spdk/idxd_spec.h 00:03:11.521 CC examples/ioat/perf/perf.o 00:03:11.521 CC examples/util/zipf/zipf.o 00:03:11.521 TEST_HEADER include/spdk/init.h 00:03:11.521 TEST_HEADER include/spdk/ioat.h 00:03:11.521 TEST_HEADER include/spdk/ioat_spec.h 00:03:11.521 CC test/thread/poller_perf/poller_perf.o 00:03:11.521 TEST_HEADER include/spdk/iscsi_spec.h 00:03:11.521 TEST_HEADER include/spdk/json.h 00:03:11.521 TEST_HEADER include/spdk/jsonrpc.h 00:03:11.521 TEST_HEADER include/spdk/keyring.h 00:03:11.521 TEST_HEADER include/spdk/keyring_module.h 00:03:11.521 TEST_HEADER include/spdk/likely.h 00:03:11.521 TEST_HEADER include/spdk/log.h 00:03:11.521 TEST_HEADER include/spdk/lvol.h 00:03:11.521 TEST_HEADER include/spdk/md5.h 00:03:11.521 TEST_HEADER include/spdk/memory.h 00:03:11.521 TEST_HEADER include/spdk/mmio.h 00:03:11.521 TEST_HEADER include/spdk/nbd.h 00:03:11.521 TEST_HEADER include/spdk/net.h 00:03:11.521 TEST_HEADER include/spdk/notify.h 00:03:11.521 CC test/dma/test_dma/test_dma.o 00:03:11.521 TEST_HEADER include/spdk/nvme.h 00:03:11.521 TEST_HEADER include/spdk/nvme_intel.h 00:03:11.521 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:11.521 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:11.521 TEST_HEADER include/spdk/nvme_spec.h 00:03:11.521 TEST_HEADER include/spdk/nvme_zns.h 00:03:11.521 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:11.521 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:11.521 TEST_HEADER include/spdk/nvmf.h 00:03:11.521 TEST_HEADER include/spdk/nvmf_spec.h 00:03:11.521 TEST_HEADER include/spdk/nvmf_transport.h 00:03:11.521 TEST_HEADER include/spdk/opal.h 00:03:11.521 TEST_HEADER include/spdk/opal_spec.h 00:03:11.521 TEST_HEADER include/spdk/pci_ids.h 00:03:11.521 TEST_HEADER include/spdk/pipe.h 00:03:11.521 TEST_HEADER include/spdk/queue.h 00:03:11.521 CC test/app/bdev_svc/bdev_svc.o 00:03:11.521 TEST_HEADER include/spdk/reduce.h 00:03:11.521 TEST_HEADER include/spdk/rpc.h 00:03:11.521 TEST_HEADER include/spdk/scheduler.h 00:03:11.521 LINK rpc_client_test 00:03:11.521 TEST_HEADER include/spdk/scsi.h 00:03:11.521 TEST_HEADER include/spdk/scsi_spec.h 00:03:11.521 TEST_HEADER 
include/spdk/sock.h 00:03:11.521 TEST_HEADER include/spdk/stdinc.h 00:03:11.521 TEST_HEADER include/spdk/string.h 00:03:11.521 TEST_HEADER include/spdk/thread.h 00:03:11.521 TEST_HEADER include/spdk/trace.h 00:03:11.521 CC test/env/mem_callbacks/mem_callbacks.o 00:03:11.521 TEST_HEADER include/spdk/trace_parser.h 00:03:11.521 TEST_HEADER include/spdk/tree.h 00:03:11.521 TEST_HEADER include/spdk/ublk.h 00:03:11.521 TEST_HEADER include/spdk/util.h 00:03:11.521 TEST_HEADER include/spdk/uuid.h 00:03:11.521 TEST_HEADER include/spdk/version.h 00:03:11.521 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:11.521 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:11.521 TEST_HEADER include/spdk/vhost.h 00:03:11.521 TEST_HEADER include/spdk/vmd.h 00:03:11.521 TEST_HEADER include/spdk/xor.h 00:03:11.521 TEST_HEADER include/spdk/zipf.h 00:03:11.521 CXX test/cpp_headers/accel.o 00:03:11.521 LINK zipf 00:03:11.521 LINK interrupt_tgt 00:03:11.521 LINK poller_perf 00:03:11.521 LINK bdev_svc 00:03:11.521 CXX test/cpp_headers/accel_module.o 00:03:11.795 LINK ioat_perf 00:03:11.795 CXX test/cpp_headers/assert.o 00:03:11.795 LINK spdk_trace 00:03:11.795 CC test/env/vtophys/vtophys.o 00:03:11.795 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:11.795 CC app/trace_record/trace_record.o 00:03:11.795 CC examples/ioat/verify/verify.o 00:03:11.795 CXX test/cpp_headers/barrier.o 00:03:11.795 LINK vtophys 00:03:11.795 LINK env_dpdk_post_init 00:03:11.795 CC examples/thread/thread/thread_ex.o 00:03:12.053 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:12.053 LINK test_dma 00:03:12.053 CXX test/cpp_headers/base64.o 00:03:12.053 LINK spdk_trace_record 00:03:12.053 LINK mem_callbacks 00:03:12.053 CC examples/sock/hello_world/hello_sock.o 00:03:12.053 LINK verify 00:03:12.053 CC test/app/histogram_perf/histogram_perf.o 00:03:12.053 CXX test/cpp_headers/bdev.o 00:03:12.053 LINK thread 00:03:12.053 CC app/nvmf_tgt/nvmf_main.o 00:03:12.053 CC test/app/jsoncat/jsoncat.o 00:03:12.053 CC test/env/memory/memory_ut.o 00:03:12.053 CC test/env/pci/pci_ut.o 00:03:12.312 LINK histogram_perf 00:03:12.312 LINK hello_sock 00:03:12.312 CC test/app/stub/stub.o 00:03:12.312 CXX test/cpp_headers/bdev_module.o 00:03:12.312 LINK jsoncat 00:03:12.312 LINK nvme_fuzz 00:03:12.312 LINK nvmf_tgt 00:03:12.312 CXX test/cpp_headers/bdev_zone.o 00:03:12.312 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:12.569 LINK stub 00:03:12.569 LINK pci_ut 00:03:12.569 CXX test/cpp_headers/bit_array.o 00:03:12.570 CC examples/vmd/lsvmd/lsvmd.o 00:03:12.570 CC examples/vmd/led/led.o 00:03:12.570 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:12.570 CC examples/idxd/perf/perf.o 00:03:12.570 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.570 LINK lsvmd 00:03:12.570 LINK led 00:03:12.828 CXX test/cpp_headers/bit_pool.o 00:03:12.828 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:12.828 CC app/spdk_tgt/spdk_tgt.o 00:03:12.828 CC app/spdk_lspci/spdk_lspci.o 00:03:12.828 CXX test/cpp_headers/blob_bdev.o 00:03:12.828 LINK iscsi_tgt 00:03:12.828 LINK spdk_lspci 00:03:12.828 LINK idxd_perf 00:03:12.828 LINK spdk_tgt 00:03:12.828 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.828 CC test/event/event_perf/event_perf.o 00:03:13.085 CC test/nvme/aer/aer.o 00:03:13.085 CC test/event/reactor/reactor.o 00:03:13.085 CC test/nvme/reset/reset.o 00:03:13.085 LINK event_perf 00:03:13.085 CXX test/cpp_headers/blobfs.o 00:03:13.085 LINK memory_ut 00:03:13.085 LINK vhost_fuzz 00:03:13.085 LINK reactor 00:03:13.085 CC app/spdk_nvme_perf/perf.o 00:03:13.343 LINK aer 00:03:13.343 CXX 
test/cpp_headers/blob.o 00:03:13.343 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:13.343 CC app/spdk_nvme_identify/identify.o 00:03:13.343 LINK reset 00:03:13.343 CC app/spdk_nvme_discover/discovery_aer.o 00:03:13.343 CC app/spdk_top/spdk_top.o 00:03:13.343 CC test/event/reactor_perf/reactor_perf.o 00:03:13.343 CXX test/cpp_headers/conf.o 00:03:13.343 LINK hello_fsdev 00:03:13.601 CC app/vhost/vhost.o 00:03:13.601 LINK reactor_perf 00:03:13.601 CXX test/cpp_headers/config.o 00:03:13.601 LINK spdk_nvme_discover 00:03:13.601 CXX test/cpp_headers/cpuset.o 00:03:13.601 CC test/nvme/sgl/sgl.o 00:03:13.601 LINK vhost 00:03:13.601 CXX test/cpp_headers/crc16.o 00:03:13.601 CC test/event/app_repeat/app_repeat.o 00:03:13.858 CC examples/accel/perf/accel_perf.o 00:03:13.858 CC app/spdk_dd/spdk_dd.o 00:03:13.858 CXX test/cpp_headers/crc32.o 00:03:13.858 LINK sgl 00:03:13.858 LINK app_repeat 00:03:13.858 CC app/fio/nvme/fio_plugin.o 00:03:13.858 CXX test/cpp_headers/crc64.o 00:03:13.858 LINK spdk_nvme_perf 00:03:14.116 CXX test/cpp_headers/dif.o 00:03:14.116 CC test/nvme/e2edp/nvme_dp.o 00:03:14.116 CC test/event/scheduler/scheduler.o 00:03:14.116 LINK spdk_nvme_identify 00:03:14.116 LINK iscsi_fuzz 00:03:14.116 CXX test/cpp_headers/dma.o 00:03:14.116 LINK spdk_dd 00:03:14.374 LINK spdk_top 00:03:14.374 LINK accel_perf 00:03:14.374 CXX test/cpp_headers/endian.o 00:03:14.374 LINK scheduler 00:03:14.374 CC examples/blob/hello_world/hello_blob.o 00:03:14.374 CXX test/cpp_headers/env_dpdk.o 00:03:14.374 LINK spdk_nvme 00:03:14.374 CC examples/nvme/hello_world/hello_world.o 00:03:14.375 CC examples/blob/cli/blobcli.o 00:03:14.375 LINK nvme_dp 00:03:14.375 CXX test/cpp_headers/env.o 00:03:14.375 CXX test/cpp_headers/event.o 00:03:14.375 CXX test/cpp_headers/fd_group.o 00:03:14.632 CC app/fio/bdev/fio_plugin.o 00:03:14.633 CC test/nvme/overhead/overhead.o 00:03:14.633 LINK hello_blob 00:03:14.633 CXX test/cpp_headers/fd.o 00:03:14.633 CXX test/cpp_headers/file.o 00:03:14.633 LINK hello_world 00:03:14.633 CC examples/bdev/hello_world/hello_bdev.o 00:03:14.633 CC examples/nvme/reconnect/reconnect.o 00:03:14.633 CC test/nvme/err_injection/err_injection.o 00:03:14.633 CXX test/cpp_headers/fsdev.o 00:03:14.633 CXX test/cpp_headers/fsdev_module.o 00:03:14.890 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.890 CC test/nvme/startup/startup.o 00:03:14.890 LINK overhead 00:03:14.890 LINK err_injection 00:03:14.890 LINK hello_bdev 00:03:14.890 CXX test/cpp_headers/ftl.o 00:03:14.890 LINK blobcli 00:03:14.890 LINK reconnect 00:03:14.890 CC test/nvme/reserve/reserve.o 00:03:14.890 LINK startup 00:03:14.890 CC test/nvme/simple_copy/simple_copy.o 00:03:15.148 LINK spdk_bdev 00:03:15.148 CXX test/cpp_headers/fuse_dispatcher.o 00:03:15.148 CC test/nvme/connect_stress/connect_stress.o 00:03:15.148 CXX test/cpp_headers/gpt_spec.o 00:03:15.148 LINK reserve 00:03:15.148 CC examples/bdev/bdevperf/bdevperf.o 00:03:15.148 CC examples/nvme/arbitration/arbitration.o 00:03:15.148 CC examples/nvme/hotplug/hotplug.o 00:03:15.148 CXX test/cpp_headers/hexlify.o 00:03:15.148 LINK simple_copy 00:03:15.148 CC test/nvme/boot_partition/boot_partition.o 00:03:15.148 LINK connect_stress 00:03:15.148 LINK nvme_manage 00:03:15.412 CC test/nvme/compliance/nvme_compliance.o 00:03:15.412 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.412 CXX test/cpp_headers/histogram_data.o 00:03:15.412 CXX test/cpp_headers/idxd.o 00:03:15.412 CXX test/cpp_headers/idxd_spec.o 00:03:15.412 LINK boot_partition 00:03:15.412 LINK hotplug 00:03:15.412 
LINK arbitration 00:03:15.412 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.670 CXX test/cpp_headers/init.o 00:03:15.670 LINK fused_ordering 00:03:15.670 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.670 CC examples/nvme/abort/abort.o 00:03:15.670 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:15.670 LINK nvme_compliance 00:03:15.670 CC test/nvme/fdp/fdp.o 00:03:15.670 LINK doorbell_aers 00:03:15.670 CC test/nvme/cuse/cuse.o 00:03:15.670 CXX test/cpp_headers/ioat.o 00:03:15.670 CXX test/cpp_headers/ioat_spec.o 00:03:15.670 LINK cmb_copy 00:03:15.670 CXX test/cpp_headers/iscsi_spec.o 00:03:15.670 CXX test/cpp_headers/json.o 00:03:15.670 LINK pmr_persistence 00:03:15.928 CXX test/cpp_headers/jsonrpc.o 00:03:15.928 LINK bdevperf 00:03:15.928 CXX test/cpp_headers/keyring.o 00:03:15.928 CXX test/cpp_headers/keyring_module.o 00:03:15.928 LINK abort 00:03:15.928 LINK fdp 00:03:15.928 CXX test/cpp_headers/likely.o 00:03:15.928 CC test/accel/dif/dif.o 00:03:15.928 CXX test/cpp_headers/log.o 00:03:15.928 CC test/blobfs/mkfs/mkfs.o 00:03:15.928 CXX test/cpp_headers/lvol.o 00:03:15.928 CXX test/cpp_headers/md5.o 00:03:16.185 CXX test/cpp_headers/memory.o 00:03:16.185 CXX test/cpp_headers/mmio.o 00:03:16.185 CC test/lvol/esnap/esnap.o 00:03:16.185 CXX test/cpp_headers/nbd.o 00:03:16.185 CXX test/cpp_headers/net.o 00:03:16.185 CXX test/cpp_headers/notify.o 00:03:16.185 LINK mkfs 00:03:16.185 CXX test/cpp_headers/nvme.o 00:03:16.185 CXX test/cpp_headers/nvme_intel.o 00:03:16.185 CXX test/cpp_headers/nvme_ocssd.o 00:03:16.185 CC examples/nvmf/nvmf/nvmf.o 00:03:16.185 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.185 CXX test/cpp_headers/nvme_spec.o 00:03:16.442 CXX test/cpp_headers/nvme_zns.o 00:03:16.442 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.442 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:16.442 CXX test/cpp_headers/nvmf.o 00:03:16.442 CXX test/cpp_headers/nvmf_spec.o 00:03:16.442 CXX test/cpp_headers/nvmf_transport.o 00:03:16.442 CXX test/cpp_headers/opal.o 00:03:16.442 CXX test/cpp_headers/opal_spec.o 00:03:16.442 LINK nvmf 00:03:16.442 CXX test/cpp_headers/pci_ids.o 00:03:16.442 CXX test/cpp_headers/pipe.o 00:03:16.700 CXX test/cpp_headers/queue.o 00:03:16.700 CXX test/cpp_headers/reduce.o 00:03:16.700 CXX test/cpp_headers/rpc.o 00:03:16.700 CXX test/cpp_headers/scheduler.o 00:03:16.700 CXX test/cpp_headers/scsi.o 00:03:16.700 LINK dif 00:03:16.700 CXX test/cpp_headers/scsi_spec.o 00:03:16.700 CXX test/cpp_headers/sock.o 00:03:16.700 CXX test/cpp_headers/stdinc.o 00:03:16.700 CXX test/cpp_headers/string.o 00:03:16.700 CXX test/cpp_headers/thread.o 00:03:16.700 CXX test/cpp_headers/trace.o 00:03:16.700 CXX test/cpp_headers/trace_parser.o 00:03:16.700 CXX test/cpp_headers/tree.o 00:03:16.700 CXX test/cpp_headers/ublk.o 00:03:16.700 CXX test/cpp_headers/util.o 00:03:16.957 CXX test/cpp_headers/uuid.o 00:03:16.957 CXX test/cpp_headers/version.o 00:03:16.957 CXX test/cpp_headers/vfio_user_pci.o 00:03:16.957 LINK cuse 00:03:16.957 CXX test/cpp_headers/vfio_user_spec.o 00:03:16.957 CXX test/cpp_headers/vhost.o 00:03:16.957 CXX test/cpp_headers/vmd.o 00:03:16.957 CXX test/cpp_headers/xor.o 00:03:16.957 CXX test/cpp_headers/zipf.o 00:03:16.957 CC test/bdev/bdevio/bdevio.o 00:03:17.522 LINK bdevio 00:03:20.804 LINK esnap 00:03:21.061 00:03:21.061 real 1m11.942s 00:03:21.061 user 6m33.208s 00:03:21.061 sys 1m10.058s 00:03:21.061 10:02:27 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:21.061 10:02:27 make -- common/autotest_common.sh@10 -- $ set +x 00:03:21.061 
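The make stage ending here compiled every public header as a standalone translation unit: each `TEST_HEADER include/spdk/<name>.h` entry above is paired with a `CXX test/cpp_headers/<name>.o` compile, which fails if the header forgets one of its own includes. A minimal sketch of that technique, assuming a checkout at `SPDK_ROOT` and a bare `-I` include path rather than SPDK's actual build rule:

```bash
#!/usr/bin/env bash
# Compile each public header on its own; a header that is not
# self-contained fails its single-file compile.
set -euo pipefail

SPDK_ROOT=${SPDK_ROOT:-/home/vagrant/spdk_repo/spdk}
work=$(mktemp -d)
trap 'rm -rf "$work"' EXIT

for hdr in "$SPDK_ROOT"/include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "$work/$name.cpp"
    # Compiling as C++ also catches headers that are not C++-clean,
    # matching the CXX lines in the log.
    c++ -I"$SPDK_ROOT/include" -c "$work/$name.cpp" -o "$work/$name.o"
done
echo "all public headers compiled standalone"
```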
************************************ 00:03:21.061 END TEST make 00:03:21.061 ************************************ 00:03:21.319 10:02:27 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:21.319 10:02:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:21.319 10:02:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:21.319 10:02:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.319 10:02:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:21.319 10:02:27 -- pm/common@44 -- $ pid=5075 00:03:21.319 10:02:27 -- pm/common@50 -- $ kill -TERM 5075 00:03:21.319 10:02:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.319 10:02:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:21.319 10:02:27 -- pm/common@44 -- $ pid=5076 00:03:21.319 10:02:27 -- pm/common@50 -- $ kill -TERM 5076 00:03:21.319 10:02:27 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:21.319 10:02:27 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:21.319 10:02:27 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:21.319 10:02:27 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:21.319 10:02:27 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:21.319 10:02:27 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:21.319 10:02:27 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:21.319 10:02:27 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:21.319 10:02:27 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:21.319 10:02:27 -- scripts/common.sh@336 -- # IFS=.-: 00:03:21.319 10:02:27 -- scripts/common.sh@336 -- # read -ra ver1 00:03:21.319 10:02:27 -- scripts/common.sh@337 -- # IFS=.-: 00:03:21.319 10:02:27 -- scripts/common.sh@337 -- # read -ra ver2 00:03:21.319 10:02:27 -- scripts/common.sh@338 -- # local 'op=<' 00:03:21.319 10:02:27 -- scripts/common.sh@340 -- # ver1_l=2 00:03:21.319 10:02:27 -- scripts/common.sh@341 -- # ver2_l=1 00:03:21.319 10:02:27 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:21.319 10:02:27 -- scripts/common.sh@344 -- # case "$op" in 00:03:21.319 10:02:27 -- scripts/common.sh@345 -- # : 1 00:03:21.319 10:02:27 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:21.319 10:02:27 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:21.319 10:02:27 -- scripts/common.sh@365 -- # decimal 1 00:03:21.319 10:02:27 -- scripts/common.sh@353 -- # local d=1 00:03:21.319 10:02:27 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:21.319 10:02:27 -- scripts/common.sh@355 -- # echo 1 00:03:21.319 10:02:27 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:21.319 10:02:27 -- scripts/common.sh@366 -- # decimal 2 00:03:21.319 10:02:27 -- scripts/common.sh@353 -- # local d=2 00:03:21.319 10:02:27 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:21.319 10:02:27 -- scripts/common.sh@355 -- # echo 2 00:03:21.319 10:02:27 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:21.319 10:02:27 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:21.319 10:02:27 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:21.319 10:02:27 -- scripts/common.sh@368 -- # return 0 00:03:21.319 10:02:27 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:21.319 10:02:27 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:21.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.319 --rc genhtml_branch_coverage=1 00:03:21.319 --rc genhtml_function_coverage=1 00:03:21.319 --rc genhtml_legend=1 00:03:21.319 --rc geninfo_all_blocks=1 00:03:21.319 --rc geninfo_unexecuted_blocks=1 00:03:21.319 00:03:21.319 ' 00:03:21.319 10:02:27 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:21.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.320 --rc genhtml_branch_coverage=1 00:03:21.320 --rc genhtml_function_coverage=1 00:03:21.320 --rc genhtml_legend=1 00:03:21.320 --rc geninfo_all_blocks=1 00:03:21.320 --rc geninfo_unexecuted_blocks=1 00:03:21.320 00:03:21.320 ' 00:03:21.320 10:02:27 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:21.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.320 --rc genhtml_branch_coverage=1 00:03:21.320 --rc genhtml_function_coverage=1 00:03:21.320 --rc genhtml_legend=1 00:03:21.320 --rc geninfo_all_blocks=1 00:03:21.320 --rc geninfo_unexecuted_blocks=1 00:03:21.320 00:03:21.320 ' 00:03:21.320 10:02:27 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:21.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:21.320 --rc genhtml_branch_coverage=1 00:03:21.320 --rc genhtml_function_coverage=1 00:03:21.320 --rc genhtml_legend=1 00:03:21.320 --rc geninfo_all_blocks=1 00:03:21.320 --rc geninfo_unexecuted_blocks=1 00:03:21.320 00:03:21.320 ' 00:03:21.320 10:02:27 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:21.320 10:02:27 -- nvmf/common.sh@7 -- # uname -s 00:03:21.320 10:02:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:21.320 10:02:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:21.320 10:02:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:21.320 10:02:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:21.320 10:02:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:21.320 10:02:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:21.320 10:02:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:21.320 10:02:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:21.320 10:02:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:21.320 10:02:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:21.320 10:02:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c76d917-bf3c-4293-bcbd-5d30cdf374fa 00:03:21.320 
10:02:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=4c76d917-bf3c-4293-bcbd-5d30cdf374fa 00:03:21.320 10:02:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:21.320 10:02:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:21.320 10:02:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:21.320 10:02:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:21.320 10:02:27 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:21.320 10:02:27 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:21.320 10:02:27 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:21.320 10:02:27 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:21.320 10:02:27 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:21.320 10:02:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.320 10:02:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.320 10:02:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.320 10:02:27 -- paths/export.sh@5 -- # export PATH 00:03:21.320 10:02:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:21.320 10:02:27 -- nvmf/common.sh@51 -- # : 0 00:03:21.320 10:02:27 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:21.320 10:02:27 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:21.320 10:02:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:21.320 10:02:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:21.320 10:02:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:21.320 10:02:27 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:21.320 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:21.320 10:02:27 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:21.320 10:02:27 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:21.320 10:02:27 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:21.320 10:02:27 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:21.320 10:02:27 -- spdk/autotest.sh@32 -- # uname -s 00:03:21.320 10:02:27 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:21.320 10:02:27 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:21.320 10:02:27 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.320 10:02:27 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:21.320 10:02:27 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:21.320 10:02:27 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:21.320 10:02:27 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:21.320 10:02:27 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:21.320 10:02:27 -- spdk/autotest.sh@48 -- # udevadm_pid=54331 00:03:21.320 10:02:27 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:21.320 10:02:27 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:21.320 10:02:27 -- pm/common@17 -- # local monitor 00:03:21.320 10:02:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.320 10:02:27 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:21.320 10:02:27 -- pm/common@25 -- # sleep 1 00:03:21.320 10:02:27 -- pm/common@21 -- # date +%s 00:03:21.320 10:02:27 -- pm/common@21 -- # date +%s 00:03:21.320 10:02:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733479347 00:03:21.320 10:02:27 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733479347 00:03:21.320 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733479347_collect-cpu-load.pm.log 00:03:21.320 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733479347_collect-vmstat.pm.log 00:03:22.694 10:02:28 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:22.694 10:02:28 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:22.694 10:02:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:22.694 10:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:22.694 10:02:28 -- spdk/autotest.sh@59 -- # create_test_list 00:03:22.694 10:02:28 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:22.694 10:02:28 -- common/autotest_common.sh@10 -- # set +x 00:03:22.694 10:02:28 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:22.694 10:02:28 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:22.694 10:02:28 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:22.694 10:02:28 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:22.694 10:02:28 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:22.694 10:02:28 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:22.694 10:02:28 -- common/autotest_common.sh@1457 -- # uname 00:03:22.694 10:02:28 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:22.694 10:02:28 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:22.694 10:02:28 -- common/autotest_common.sh@1477 -- # uname 00:03:22.694 10:02:28 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:22.694 10:02:28 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:22.694 10:02:28 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:22.694 lcov: LCOV version 1.15 00:03:22.694 10:02:28 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:37.667 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:37.667 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:55.811 10:02:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:55.811 10:02:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.811 10:02:59 -- common/autotest_common.sh@10 -- # set +x 00:03:55.811 10:02:59 -- spdk/autotest.sh@78 -- # rm -f 00:03:55.811 10:02:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.811 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:55.811 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:55.811 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:55.811 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:55.811 10:03:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:55.811 10:03:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:55.811 10:03:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:55.811 10:03:00 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:55.811 10:03:00 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:55.811 10:03:00 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:55.811 10:03:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:55.811 10:03:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:55.811 10:03:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:55.811 10:03:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:03:55.811 10:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:03:55.811 10:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:03:55.811 10:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:03:55.811 10:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:55.811 10:03:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.811 10:03:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:55.811 10:03:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:55.811 10:03:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.811 10:03:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:55.811 10:03:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.811 10:03:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.811 10:03:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:55.811 10:03:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:55.811 10:03:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.811 No valid GPT data, bailing 00:03:55.811 10:03:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.811 10:03:00 -- scripts/common.sh@394 -- # pt= 00:03:55.811 10:03:00 -- scripts/common.sh@395 -- # return 1 00:03:55.811 10:03:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.811 1+0 records in 00:03:55.811 1+0 records out 00:03:55.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289084 s, 36.3 MB/s 00:03:55.811 10:03:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.811 10:03:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.811 10:03:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:55.811 10:03:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:55.811 10:03:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:55.811 No valid GPT data, bailing 00:03:55.811 10:03:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:55.811 10:03:00 -- scripts/common.sh@394 -- # pt= 00:03:55.811 10:03:00 -- scripts/common.sh@395 -- # return 1 00:03:55.811 10:03:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:55.811 1+0 records in 00:03:55.811 1+0 records out 00:03:55.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00663738 s, 158 MB/s 00:03:55.812 10:03:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.812 10:03:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.812 10:03:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:55.812 10:03:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:55.812 10:03:00 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:55.812 No valid GPT data, bailing 00:03:55.812 10:03:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:55.812 10:03:00 -- scripts/common.sh@394 -- # pt= 00:03:55.812 10:03:00 -- scripts/common.sh@395 -- # return 1 00:03:55.812 10:03:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:55.812 1+0 records in 00:03:55.812 1+0 records out 00:03:55.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678985 s, 154 MB/s 00:03:55.812 10:03:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.812 10:03:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.812 10:03:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:03:55.812 10:03:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:03:55.812 10:03:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:55.812 No valid GPT data, bailing 00:03:55.812 10:03:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:55.812 10:03:01 -- scripts/common.sh@394 -- # pt= 00:03:55.812 10:03:01 -- scripts/common.sh@395 -- # return 1 00:03:55.812 10:03:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:55.812 1+0 records in 00:03:55.812 1+0 records out 00:03:55.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649071 s, 162 MB/s 00:03:55.812 10:03:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.812 10:03:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.812 10:03:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:03:55.812 10:03:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:03:55.812 10:03:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:55.812 No valid GPT data, bailing 00:03:55.812 10:03:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:55.812 10:03:01 -- scripts/common.sh@394 -- # pt= 00:03:55.812 10:03:01 -- scripts/common.sh@395 -- # return 1 00:03:55.812 10:03:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:55.812 1+0 records in 00:03:55.812 1+0 records out 00:03:55.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490637 s, 214 MB/s 00:03:55.812 10:03:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.812 10:03:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.812 10:03:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:55.812 10:03:01 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:55.812 10:03:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:55.812 No valid GPT data, bailing 00:03:55.812 10:03:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:55.812 10:03:01 -- scripts/common.sh@394 -- # pt= 00:03:55.812 10:03:01 -- scripts/common.sh@395 -- # return 1 00:03:55.812 10:03:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:55.812 1+0 records in 00:03:55.812 1+0 records out 00:03:55.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661016 s, 159 MB/s 00:03:55.812 10:03:01 -- spdk/autotest.sh@105 -- # sync 00:03:55.812 10:03:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.812 10:03:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.812 10:03:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.193 
10:03:03 -- spdk/autotest.sh@111 -- # uname -s 00:03:57.194 10:03:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:57.194 10:03:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:57.194 10:03:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:57.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.024 Hugepages 00:03:58.024 node hugesize free / total 00:03:58.024 node0 1048576kB 0 / 0 00:03:58.024 node0 2048kB 0 / 0 00:03:58.024 00:03:58.024 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.024 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:58.287 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:58.287 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:58.287 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:58.287 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:58.287 10:03:04 -- spdk/autotest.sh@117 -- # uname -s 00:03:58.287 10:03:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:58.287 10:03:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:58.287 10:03:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.526 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.526 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.526 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.526 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:03:59.526 10:03:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:00.912 10:03:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:00.912 10:03:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:00.912 10:03:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:00.912 10:03:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:00.912 10:03:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:00.912 10:03:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:00.912 10:03:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.912 10:03:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:00.912 10:03:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:00.912 10:03:06 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:00.912 10:03:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:00.912 10:03:06 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.174 Waiting for block devices as requested 00:04:01.174 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:01.174 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:01.436 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:01.436 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.748 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:06.748 10:03:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.748 10:03:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
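The loop beginning here (and running through the next several entries) resolves each PCI address to its `/dev/nvmeX` controller node via sysfs, then reads the OACS capability word to check for namespace management (bit mask 0x8) before deciding whether a namespace revert is needed. A hedged sketch of that mapping; `get_ctrlr_for_bdf` is a hypothetical helper name, and nvme-cli is assumed to be installed:

```bash
# Map a PCI bdf to its NVMe controller character device via sysfs.
get_ctrlr_for_bdf() {
    local bdf=$1 path
    # Each /sys/class/nvme/nvmeN symlink resolves to a path that embeds the
    # controller's PCI address; keep the one matching our bdf.
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
    printf '/dev/%s\n' "$(basename "$path")"
}

ctrlr=$(get_ctrlr_for_bdf 0000:00:10.0)
# OACS bit 3 (0x8) advertises namespace management; the 0x12a value in the
# log has it set, which is why the trace computes oacs_ns_manage=8.
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
(( oacs & 0x8 )) && echo "$ctrlr supports namespace management"
```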
00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.748 10:03:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:06.748 10:03:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:06.748 10:03:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.748 10:03:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.748 10:03:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.748 10:03:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1543 -- # continue 00:04:06.748 10:03:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.748 10:03:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.748 10:03:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.748 10:03:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:04:06.748 10:03:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1543 -- # continue 00:04:06.748 10:03:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.748 10:03:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.748 10:03:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.748 10:03:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.748 10:03:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.748 10:03:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1543 -- # continue 00:04:06.748 10:03:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.748 10:03:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:06.748 10:03:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:06.748 10:03:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:06.748 10:03:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:06.749 10:03:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:06.749 10:03:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:06.749 10:03:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.749 10:03:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:06.749 10:03:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.749 10:03:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.749 10:03:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.749 10:03:12 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.749 10:03:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:06.749 10:03:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.749 10:03:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.749 10:03:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.749 10:03:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.749 10:03:12 -- common/autotest_common.sh@1543 -- # continue 00:04:06.749 10:03:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:06.749 10:03:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.749 10:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:06.749 10:03:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:06.749 10:03:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.749 10:03:12 -- common/autotest_common.sh@10 -- # set +x 00:04:06.749 10:03:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.893 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.893 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.893 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.893 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.154 10:03:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:08.154 10:03:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.154 10:03:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.154 10:03:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:08.154 10:03:14 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:08.154 10:03:14 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.154 10:03:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:08.154 10:03:14 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:08.154 10:03:14 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:08.154 10:03:14 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:08.154 10:03:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:08.154 10:03:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.154 10:03:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.154 10:03:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.154 10:03:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.154 10:03:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.154 10:03:14 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:08.154 10:03:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:08.154 10:03:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.154 10:03:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:08.154 10:03:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.154 10:03:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.154 10:03:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.154 10:03:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:08.154 10:03:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.154 
10:03:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.154 10:03:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.155 10:03:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:08.155 10:03:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.155 10:03:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.155 10:03:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.155 10:03:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:08.155 10:03:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.155 10:03:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.155 10:03:14 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:08.155 10:03:14 -- common/autotest_common.sh@1572 -- # return 0 00:04:08.155 10:03:14 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:08.155 10:03:14 -- common/autotest_common.sh@1580 -- # return 0 00:04:08.155 10:03:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:08.155 10:03:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:08.155 10:03:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.155 10:03:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.155 10:03:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:08.155 10:03:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.155 10:03:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.155 10:03:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:08.155 10:03:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.155 10:03:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.155 10:03:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.155 10:03:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.155 ************************************ 00:04:08.155 START TEST env 00:04:08.155 ************************************ 00:04:08.155 10:03:14 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.155 * Looking for test storage... 
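Just above, `opal_revert_cleanup` walked the same four controllers and compared each PCI device id against 0x0a54 (an Intel datacenter NVMe id) before concluding there was nothing to revert; these QEMU devices all report 0x0010, so the list comes back empty. A small sketch of that filter, with the bdf list hard-coded for illustration:

```bash
# Collect only controllers whose PCI device id matches the target.
target=0x0a54
matches=()
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    dev=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $dev == "$target" ]] && matches+=("$bdf")
done
echo "found ${#matches[@]} matching controller(s)"   # 0 on this QEMU setup
```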
00:04:08.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.416 10:03:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.416 10:03:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.416 10:03:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.416 10:03:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.416 10:03:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.416 10:03:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.416 10:03:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.416 10:03:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.416 10:03:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.416 10:03:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.416 10:03:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.416 10:03:14 env -- scripts/common.sh@344 -- # case "$op" in 00:04:08.416 10:03:14 env -- scripts/common.sh@345 -- # : 1 00:04:08.416 10:03:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.416 10:03:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.416 10:03:14 env -- scripts/common.sh@365 -- # decimal 1 00:04:08.416 10:03:14 env -- scripts/common.sh@353 -- # local d=1 00:04:08.416 10:03:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.416 10:03:14 env -- scripts/common.sh@355 -- # echo 1 00:04:08.416 10:03:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.416 10:03:14 env -- scripts/common.sh@366 -- # decimal 2 00:04:08.416 10:03:14 env -- scripts/common.sh@353 -- # local d=2 00:04:08.416 10:03:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.416 10:03:14 env -- scripts/common.sh@355 -- # echo 2 00:04:08.416 10:03:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.416 10:03:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.416 10:03:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.416 10:03:14 env -- scripts/common.sh@368 -- # return 0 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.416 --rc genhtml_branch_coverage=1 00:04:08.416 --rc genhtml_function_coverage=1 00:04:08.416 --rc genhtml_legend=1 00:04:08.416 --rc geninfo_all_blocks=1 00:04:08.416 --rc geninfo_unexecuted_blocks=1 00:04:08.416 00:04:08.416 ' 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.416 --rc genhtml_branch_coverage=1 00:04:08.416 --rc genhtml_function_coverage=1 00:04:08.416 --rc genhtml_legend=1 00:04:08.416 --rc geninfo_all_blocks=1 00:04:08.416 --rc geninfo_unexecuted_blocks=1 00:04:08.416 00:04:08.416 ' 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.416 --rc genhtml_branch_coverage=1 00:04:08.416 --rc genhtml_function_coverage=1 00:04:08.416 --rc 
genhtml_legend=1 00:04:08.416 --rc geninfo_all_blocks=1 00:04:08.416 --rc geninfo_unexecuted_blocks=1 00:04:08.416 00:04:08.416 ' 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.416 --rc genhtml_branch_coverage=1 00:04:08.416 --rc genhtml_function_coverage=1 00:04:08.416 --rc genhtml_legend=1 00:04:08.416 --rc geninfo_all_blocks=1 00:04:08.416 --rc geninfo_unexecuted_blocks=1 00:04:08.416 00:04:08.416 ' 00:04:08.416 10:03:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.416 10:03:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.416 10:03:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.416 ************************************ 00:04:08.416 START TEST env_memory 00:04:08.416 ************************************ 00:04:08.416 10:03:14 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.416 00:04:08.416 00:04:08.416 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.416 http://cunit.sourceforge.net/ 00:04:08.416 00:04:08.416 00:04:08.416 Suite: memory 00:04:08.416 Test: alloc and free memory map ...[2024-12-06 10:03:14.481891] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.416 passed 00:04:08.416 Test: mem map translation ...[2024-12-06 10:03:14.523817] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.416 [2024-12-06 10:03:14.523917] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.416 [2024-12-06 10:03:14.523986] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.416 [2024-12-06 10:03:14.524005] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.677 passed 00:04:08.677 Test: mem map registration ...[2024-12-06 10:03:14.592711] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.677 [2024-12-06 10:03:14.592796] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.677 passed 00:04:08.677 Test: mem map adjacent registrations ...passed 00:04:08.677 00:04:08.677 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.677 suites 1 1 n/a 0 0 00:04:08.677 tests 4 4 4 0 0 00:04:08.677 asserts 152 152 152 0 n/a 00:04:08.677 00:04:08.677 Elapsed time = 0.238 seconds 00:04:08.677 00:04:08.677 real 0m0.277s 00:04:08.677 user 0m0.239s 00:04:08.677 sys 0m0.028s 00:04:08.677 10:03:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.677 ************************************ 00:04:08.677 END TEST env_memory 00:04:08.677 ************************************ 00:04:08.677 10:03:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.677 10:03:14 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.677 10:03:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.677 10:03:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.677 10:03:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.677 ************************************ 00:04:08.677 START TEST env_vtophys 00:04:08.677 ************************************ 00:04:08.677 10:03:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.677 EAL: lib.eal log level changed from notice to debug 00:04:08.677 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 1 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 2 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 3 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 4 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 5 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 6 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 7 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 8 as core 0 on socket 0 00:04:08.677 EAL: Detected lcore 9 as core 0 on socket 0 00:04:08.677 EAL: Maximum logical cores by configuration: 128 00:04:08.677 EAL: Detected CPU lcores: 10 00:04:08.677 EAL: Detected NUMA nodes: 1 00:04:08.677 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.677 EAL: Detected shared linkage of DPDK 00:04:08.677 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.677 EAL: Selected IOVA mode 'PA' 00:04:08.677 EAL: Probing VFIO support... 00:04:08.677 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.677 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:08.677 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.677 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.677 EAL: Setting up physically contiguous memory... 
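The *ERROR* lines printed by the env_memory suite above are expected: memory_ut deliberately passes unaligned values (len=1234, vaddr=4d2) to exercise the validation paths of the mem-map API. A minimal C sketch of those calls follows -- it assumes the SPDK headers are on the include path, and the no-op notify callback and example translation value are illustrative, not taken from this run:

    #include "spdk/stdinc.h"
    #include "spdk/env.h"

    /* Accept every register/unregister notification without doing any work. */
    static int
    noop_notify(void *cb_ctx, struct spdk_mem_map *map,
                enum spdk_mem_map_notify_action action,
                void *vaddr, size_t len)
    {
        return 0;
    }

    static const struct spdk_mem_map_ops noop_ops = {
        .notify_cb = noop_notify,
        .are_contiguous = NULL,
    };

    static void
    mem_map_sketch(void)
    {
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &noop_ops, NULL);
        uint64_t vaddr = 0x200000;  /* must be 2 MB aligned... */
        uint64_t len = 0x200000;    /* ...as must the length */

        /* Passing len=1234 here would reproduce the "invalid
         * spdk_mem_map_set_translation parameters" error above. */
        spdk_mem_map_set_translation(map, vaddr, len, vaddr);
        spdk_mem_map_clear_translation(map, vaddr, len);
        spdk_mem_map_free(&map);
    }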
00:04:08.677 EAL: Setting maximum number of open files to 524288 00:04:08.677 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.677 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.677 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.677 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.677 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.677 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.677 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.677 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.677 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.677 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.677 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.677 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.677 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.677 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.677 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.677 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.677 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.677 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.677 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.677 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.677 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.677 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.677 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.677 EAL: Hugepages will be freed exactly as allocated. 00:04:08.677 EAL: No shared files mode enabled, IPC is disabled 00:04:08.677 EAL: No shared files mode enabled, IPC is disabled 00:04:08.937 EAL: TSC frequency is ~2600000 KHz 00:04:08.937 EAL: Main lcore 0 is ready (tid=7f4c31617a40;cpuset=[0]) 00:04:08.937 EAL: Trying to obtain current memory policy. 00:04:08.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.937 EAL: Restoring previous memory policy: 0 00:04:08.937 EAL: request: mp_malloc_sync 00:04:08.937 EAL: No shared files mode enabled, IPC is disabled 00:04:08.937 EAL: Heap on socket 0 was expanded by 2MB 00:04:08.937 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.937 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:08.937 EAL: Mem event callback 'spdk:(nil)' registered 00:04:08.937 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:08.937 00:04:08.937 00:04:08.937 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.937 http://cunit.sourceforge.net/ 00:04:08.937 00:04:08.937 00:04:08.937 Suite: components_suite 00:04:09.198 Test: vtophys_malloc_test ...passed 00:04:09.198 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:09.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.198 EAL: Restoring previous memory policy: 4 00:04:09.198 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.198 EAL: request: mp_malloc_sync 00:04:09.198 EAL: No shared files mode enabled, IPC is disabled 00:04:09.198 EAL: Heap on socket 0 was expanded by 4MB 00:04:09.198 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.198 EAL: request: mp_malloc_sync 00:04:09.198 EAL: No shared files mode enabled, IPC is disabled 00:04:09.198 EAL: Heap on socket 0 was shrunk by 4MB 00:04:09.198 EAL: Trying to obtain current memory policy. 00:04:09.198 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.198 EAL: Restoring previous memory policy: 4 00:04:09.198 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.198 EAL: request: mp_malloc_sync 00:04:09.198 EAL: No shared files mode enabled, IPC is disabled 00:04:09.198 EAL: Heap on socket 0 was expanded by 6MB 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.456 EAL: request: mp_malloc_sync 00:04:09.456 EAL: No shared files mode enabled, IPC is disabled 00:04:09.456 EAL: Heap on socket 0 was shrunk by 6MB 00:04:09.456 EAL: Trying to obtain current memory policy. 00:04:09.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.456 EAL: Restoring previous memory policy: 4 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.456 EAL: request: mp_malloc_sync 00:04:09.456 EAL: No shared files mode enabled, IPC is disabled 00:04:09.456 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.456 EAL: request: mp_malloc_sync 00:04:09.456 EAL: No shared files mode enabled, IPC is disabled 00:04:09.456 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.456 EAL: Trying to obtain current memory policy. 00:04:09.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.456 EAL: Restoring previous memory policy: 4 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.456 EAL: request: mp_malloc_sync 00:04:09.456 EAL: No shared files mode enabled, IPC is disabled 00:04:09.456 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.456 EAL: request: mp_malloc_sync 00:04:09.456 EAL: No shared files mode enabled, IPC is disabled 00:04:09.456 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.456 EAL: Trying to obtain current memory policy. 00:04:09.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.456 EAL: Restoring previous memory policy: 4 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.456 EAL: request: mp_malloc_sync 00:04:09.456 EAL: No shared files mode enabled, IPC is disabled 00:04:09.456 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.456 EAL: request: mp_malloc_sync 00:04:09.456 EAL: No shared files mode enabled, IPC is disabled 00:04:09.456 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.456 EAL: Trying to obtain current memory policy. 
00:04:09.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.456 EAL: Restoring previous memory policy: 4 00:04:09.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.457 EAL: request: mp_malloc_sync 00:04:09.457 EAL: No shared files mode enabled, IPC is disabled 00:04:09.457 EAL: Heap on socket 0 was expanded by 66MB 00:04:09.457 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.457 EAL: request: mp_malloc_sync 00:04:09.457 EAL: No shared files mode enabled, IPC is disabled 00:04:09.457 EAL: Heap on socket 0 was shrunk by 66MB 00:04:09.718 EAL: Trying to obtain current memory policy. 00:04:09.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.718 EAL: Restoring previous memory policy: 4 00:04:09.718 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.718 EAL: request: mp_malloc_sync 00:04:09.718 EAL: No shared files mode enabled, IPC is disabled 00:04:09.718 EAL: Heap on socket 0 was expanded by 130MB 00:04:09.718 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.978 EAL: request: mp_malloc_sync 00:04:09.978 EAL: No shared files mode enabled, IPC is disabled 00:04:09.978 EAL: Heap on socket 0 was shrunk by 130MB 00:04:09.978 EAL: Trying to obtain current memory policy. 00:04:09.978 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.978 EAL: Restoring previous memory policy: 4 00:04:09.978 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.978 EAL: request: mp_malloc_sync 00:04:09.978 EAL: No shared files mode enabled, IPC is disabled 00:04:09.978 EAL: Heap on socket 0 was expanded by 258MB 00:04:10.239 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.499 EAL: request: mp_malloc_sync 00:04:10.499 EAL: No shared files mode enabled, IPC is disabled 00:04:10.499 EAL: Heap on socket 0 was shrunk by 258MB 00:04:10.759 EAL: Trying to obtain current memory policy. 00:04:10.759 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.759 EAL: Restoring previous memory policy: 4 00:04:10.759 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.759 EAL: request: mp_malloc_sync 00:04:10.759 EAL: No shared files mode enabled, IPC is disabled 00:04:10.759 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.330 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.590 EAL: request: mp_malloc_sync 00:04:11.590 EAL: No shared files mode enabled, IPC is disabled 00:04:11.590 EAL: Heap on socket 0 was shrunk by 514MB 00:04:12.160 EAL: Trying to obtain current memory policy. 
00:04:12.160 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.422 EAL: Restoring previous memory policy: 4 00:04:12.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.422 EAL: request: mp_malloc_sync 00:04:12.422 EAL: No shared files mode enabled, IPC is disabled 00:04:12.422 EAL: Heap on socket 0 was expanded by 1026MB 00:04:13.810 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.810 EAL: request: mp_malloc_sync 00:04:13.810 EAL: No shared files mode enabled, IPC is disabled 00:04:13.810 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.750 passed 00:04:14.750 00:04:14.750 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.750 suites 1 1 n/a 0 0 00:04:14.750 tests 2 2 2 0 0 00:04:14.750 asserts 5817 5817 5817 0 n/a 00:04:14.750 00:04:14.750 Elapsed time = 5.841 seconds 00:04:14.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.750 EAL: request: mp_malloc_sync 00:04:14.750 EAL: No shared files mode enabled, IPC is disabled 00:04:14.750 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.750 EAL: No shared files mode enabled, IPC is disabled 00:04:14.750 EAL: No shared files mode enabled, IPC is disabled 00:04:14.750 EAL: No shared files mode enabled, IPC is disabled 00:04:14.750 00:04:14.750 real 0m6.133s 00:04:14.750 user 0m5.004s 00:04:14.750 sys 0m0.961s 00:04:14.750 10:03:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.750 ************************************ 00:04:14.750 END TEST env_vtophys 00:04:14.750 ************************************ 00:04:14.750 10:03:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:15.011 10:03:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.011 10:03:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.011 10:03:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.011 10:03:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.011 ************************************ 00:04:15.011 START TEST env_pci 00:04:15.011 ************************************ 00:04:15.011 10:03:20 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:15.011 00:04:15.011 00:04:15.011 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.011 http://cunit.sourceforge.net/ 00:04:15.011 00:04:15.011 00:04:15.011 Suite: pci 00:04:15.012 Test: pci_hook ...[2024-12-06 10:03:21.017034] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57131 has claimed it 00:04:15.012 passed 00:04:15.012 00:04:15.012 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.012 suites 1 1 n/a 0 0 00:04:15.012 tests 1 1 1 0 0 00:04:15.012 asserts 25 25 25 0 n/a 00:04:15.012 00:04:15.012 Elapsed time = 0.007 seconds 00:04:15.012 EAL: Cannot find device (10000:00:01.0) 00:04:15.012 EAL: Failed to attach device on primary process 00:04:15.012 00:04:15.012 real 0m0.069s 00:04:15.012 user 0m0.029s 00:04:15.012 sys 0m0.039s 00:04:15.012 10:03:21 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.012 ************************************ 00:04:15.012 END TEST env_pci 00:04:15.012 ************************************ 00:04:15.012 10:03:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:15.012 10:03:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:15.012 10:03:21 env -- env/env.sh@15 -- # uname 00:04:15.012 10:03:21 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:15.012 10:03:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:15.012 10:03:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.012 10:03:21 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:15.012 10:03:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.012 10:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.012 ************************************ 00:04:15.012 START TEST env_dpdk_post_init 00:04:15.012 ************************************ 00:04:15.012 10:03:21 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.273 EAL: Detected CPU lcores: 10 00:04:15.273 EAL: Detected NUMA nodes: 1 00:04:15.273 EAL: Detected shared linkage of DPDK 00:04:15.273 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.273 EAL: Selected IOVA mode 'PA' 00:04:15.273 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:15.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:15.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:15.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:15.273 Starting DPDK initialization... 00:04:15.273 Starting SPDK post initialization... 00:04:15.273 SPDK NVMe probe 00:04:15.273 Attaching to 0000:00:10.0 00:04:15.273 Attaching to 0000:00:11.0 00:04:15.273 Attaching to 0000:00:12.0 00:04:15.273 Attaching to 0000:00:13.0 00:04:15.273 Attached to 0000:00:10.0 00:04:15.273 Attached to 0000:00:11.0 00:04:15.273 Attached to 0000:00:13.0 00:04:15.273 Attached to 0000:00:12.0 00:04:15.273 Cleaning up... 
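The Attaching/Attached lines above are env_dpdk_post_init probing the four emulated NVMe controllers (1b36:0010) after bringing up the SPDK environment. A minimal sketch of the same init-then-probe flow, reusing the -c 0x1 and --base-virtaddr=0x200000000000 values from the command line above (the app name is illustrative):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;  /* attach to every controller found */
    }

    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";          /* illustrative name */
        opts.core_mask = "0x1";                  /* -c 0x1 above */
        opts.base_virtaddr = 0x200000000000ULL;  /* --base-virtaddr above */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* Enumerate local PCIe NVMe devices, as the probe lines above show. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return 0;
    }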
00:04:15.273 00:04:15.273 real 0m0.270s 00:04:15.273 user 0m0.086s 00:04:15.273 sys 0m0.086s 00:04:15.273 10:03:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.273 ************************************ 00:04:15.273 END TEST env_dpdk_post_init 00:04:15.273 ************************************ 00:04:15.273 10:03:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.533 10:03:21 env -- env/env.sh@26 -- # uname 00:04:15.533 10:03:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:15.533 10:03:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.533 10:03:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.533 10:03:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.533 10:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.533 ************************************ 00:04:15.533 START TEST env_mem_callbacks 00:04:15.533 ************************************ 00:04:15.533 10:03:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.533 EAL: Detected CPU lcores: 10 00:04:15.533 EAL: Detected NUMA nodes: 1 00:04:15.533 EAL: Detected shared linkage of DPDK 00:04:15.533 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.533 EAL: Selected IOVA mode 'PA' 00:04:15.533 00:04:15.533 00:04:15.533 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.533 http://cunit.sourceforge.net/ 00:04:15.533 00:04:15.533 00:04:15.533 Suite: memory 00:04:15.533 Test: test ... 00:04:15.533 register 0x200000200000 2097152 00:04:15.533 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.533 malloc 3145728 00:04:15.533 register 0x200000400000 4194304 00:04:15.533 buf 0x2000004fffc0 len 3145728 PASSED 00:04:15.533 malloc 64 00:04:15.533 buf 0x2000004ffec0 len 64 PASSED 00:04:15.533 malloc 4194304 00:04:15.533 register 0x200000800000 6291456 00:04:15.533 buf 0x2000009fffc0 len 4194304 PASSED 00:04:15.533 free 0x2000004fffc0 3145728 00:04:15.533 free 0x2000004ffec0 64 00:04:15.533 unregister 0x200000400000 4194304 PASSED 00:04:15.533 free 0x2000009fffc0 4194304 00:04:15.533 unregister 0x200000800000 6291456 PASSED 00:04:15.533 malloc 8388608 00:04:15.533 register 0x200000400000 10485760 00:04:15.533 buf 0x2000005fffc0 len 8388608 PASSED 00:04:15.533 free 0x2000005fffc0 8388608 00:04:15.533 unregister 0x200000400000 10485760 PASSED 00:04:15.793 passed 00:04:15.793 00:04:15.793 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.793 suites 1 1 n/a 0 0 00:04:15.793 tests 1 1 1 0 0 00:04:15.793 asserts 15 15 15 0 n/a 00:04:15.793 00:04:15.793 Elapsed time = 0.051 seconds 00:04:15.793 00:04:15.793 real 0m0.230s 00:04:15.793 user 0m0.066s 00:04:15.793 sys 0m0.061s 00:04:15.793 10:03:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.793 ************************************ 00:04:15.793 END TEST env_mem_callbacks 00:04:15.793 ************************************ 00:04:15.793 10:03:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.793 00:04:15.793 real 0m7.531s 00:04:15.793 user 0m5.580s 00:04:15.793 sys 0m1.432s 00:04:15.793 10:03:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.793 ************************************ 00:04:15.793 END TEST env 00:04:15.793 ************************************ 00:04:15.793 10:03:21 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:15.793 10:03:21 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.793 10:03:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.794 10:03:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.794 10:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:15.794 ************************************ 00:04:15.794 START TEST rpc 00:04:15.794 ************************************ 00:04:15.794 10:03:21 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.794 * Looking for test storage... 00:04:15.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.794 10:03:21 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.794 10:03:21 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.794 10:03:21 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.054 10:03:21 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.054 10:03:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.054 10:03:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.054 10:03:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.054 10:03:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.054 10:03:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.054 10:03:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.054 10:03:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.054 10:03:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.054 10:03:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.054 10:03:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.054 10:03:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.054 10:03:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.054 10:03:21 rpc -- scripts/common.sh@345 -- # : 1 00:04:16.054 10:03:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.054 10:03:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.054 10:03:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.054 10:03:21 rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.054 10:03:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.054 10:03:21 rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.054 10:03:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.054 10:03:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.054 10:03:21 rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.054 10:03:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.054 10:03:21 rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.054 10:03:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.054 10:03:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.054 10:03:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.054 10:03:22 rpc -- scripts/common.sh@368 -- # return 0 00:04:16.054 10:03:22 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.054 10:03:22 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.054 --rc genhtml_branch_coverage=1 00:04:16.054 --rc genhtml_function_coverage=1 00:04:16.054 --rc genhtml_legend=1 00:04:16.054 --rc geninfo_all_blocks=1 00:04:16.054 --rc geninfo_unexecuted_blocks=1 00:04:16.054 00:04:16.054 ' 00:04:16.054 10:03:22 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.054 --rc genhtml_branch_coverage=1 00:04:16.054 --rc genhtml_function_coverage=1 00:04:16.055 --rc genhtml_legend=1 00:04:16.055 --rc geninfo_all_blocks=1 00:04:16.055 --rc geninfo_unexecuted_blocks=1 00:04:16.055 00:04:16.055 ' 00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.055 --rc genhtml_branch_coverage=1 00:04:16.055 --rc genhtml_function_coverage=1 00:04:16.055 --rc genhtml_legend=1 00:04:16.055 --rc geninfo_all_blocks=1 00:04:16.055 --rc geninfo_unexecuted_blocks=1 00:04:16.055 00:04:16.055 ' 00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.055 --rc genhtml_branch_coverage=1 00:04:16.055 --rc genhtml_function_coverage=1 00:04:16.055 --rc genhtml_legend=1 00:04:16.055 --rc geninfo_all_blocks=1 00:04:16.055 --rc geninfo_unexecuted_blocks=1 00:04:16.055 00:04:16.055 ' 00:04:16.055 10:03:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57258 00:04:16.055 10:03:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.055 10:03:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57258 00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@835 -- # '[' -z 57258 ']' 00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
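The waitforlisten helper traced above effectively retries a connect() on the RPC Unix socket until spdk_tgt is accepting connections. A hedged plain-POSIX sketch of that readiness check (the function name is illustrative):

    #include <stdbool.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Return true once something is listening on the given socket path,
     * e.g. "/var/tmp/spdk.sock" as used by the run above. */
    static bool
    rpc_sock_ready(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd, rc;

        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return false;
        }
        rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
        close(fd);
        return rc == 0;
    }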
00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.055 10:03:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.055 10:03:22 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:16.055 [2024-12-06 10:03:22.096687] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:16.055 [2024-12-06 10:03:22.096876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57258 ] 00:04:16.315 [2024-12-06 10:03:22.261693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.316 [2024-12-06 10:03:22.397820] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:16.316 [2024-12-06 10:03:22.397895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57258' to capture a snapshot of events at runtime. 00:04:16.316 [2024-12-06 10:03:22.397907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:16.316 [2024-12-06 10:03:22.397920] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:16.316 [2024-12-06 10:03:22.397929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57258 for offline analysis/debug. 00:04:16.316 [2024-12-06 10:03:22.398895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.260 10:03:23 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.260 10:03:23 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:17.260 10:03:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.260 10:03:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.260 10:03:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:17.260 10:03:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:17.260 10:03:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.260 10:03:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.260 10:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.260 ************************************ 00:04:17.260 START TEST rpc_integrity 00:04:17.260 ************************************ 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.260 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.260 { 00:04:17.260 "name": "Malloc0", 00:04:17.260 "aliases": [ 00:04:17.260 "86eea7ed-df79-4d14-b2df-a466e630b75b" 00:04:17.260 ], 00:04:17.260 "product_name": "Malloc disk", 00:04:17.260 "block_size": 512, 00:04:17.260 "num_blocks": 16384, 00:04:17.260 "uuid": "86eea7ed-df79-4d14-b2df-a466e630b75b", 00:04:17.260 "assigned_rate_limits": { 00:04:17.260 "rw_ios_per_sec": 0, 00:04:17.260 "rw_mbytes_per_sec": 0, 00:04:17.260 "r_mbytes_per_sec": 0, 00:04:17.260 "w_mbytes_per_sec": 0 00:04:17.260 }, 00:04:17.260 "claimed": false, 00:04:17.260 "zoned": false, 00:04:17.260 "supported_io_types": { 00:04:17.260 "read": true, 00:04:17.260 "write": true, 00:04:17.260 "unmap": true, 00:04:17.260 "flush": true, 00:04:17.260 "reset": true, 00:04:17.260 "nvme_admin": false, 00:04:17.260 "nvme_io": false, 00:04:17.260 "nvme_io_md": false, 00:04:17.260 "write_zeroes": true, 00:04:17.260 "zcopy": true, 00:04:17.260 "get_zone_info": false, 00:04:17.260 "zone_management": false, 00:04:17.260 "zone_append": false, 00:04:17.260 "compare": false, 00:04:17.260 "compare_and_write": false, 00:04:17.260 "abort": true, 00:04:17.260 "seek_hole": false, 00:04:17.260 "seek_data": false, 00:04:17.260 "copy": true, 00:04:17.260 "nvme_iov_md": false 00:04:17.260 }, 00:04:17.260 "memory_domains": [ 00:04:17.260 { 00:04:17.260 "dma_device_id": "system", 00:04:17.260 "dma_device_type": 1 00:04:17.260 }, 00:04:17.260 { 00:04:17.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.260 "dma_device_type": 2 00:04:17.260 } 00:04:17.260 ], 00:04:17.260 "driver_specific": {} 00:04:17.260 } 00:04:17.260 ]' 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.260 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.261 [2024-12-06 10:03:23.265507] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:17.261 [2024-12-06 10:03:23.265580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.261 [2024-12-06 10:03:23.265614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:17.261 [2024-12-06 10:03:23.265628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.261 [2024-12-06 10:03:23.268276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.261 [2024-12-06 10:03:23.268333] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.261 
Passthru0 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.261 { 00:04:17.261 "name": "Malloc0", 00:04:17.261 "aliases": [ 00:04:17.261 "86eea7ed-df79-4d14-b2df-a466e630b75b" 00:04:17.261 ], 00:04:17.261 "product_name": "Malloc disk", 00:04:17.261 "block_size": 512, 00:04:17.261 "num_blocks": 16384, 00:04:17.261 "uuid": "86eea7ed-df79-4d14-b2df-a466e630b75b", 00:04:17.261 "assigned_rate_limits": { 00:04:17.261 "rw_ios_per_sec": 0, 00:04:17.261 "rw_mbytes_per_sec": 0, 00:04:17.261 "r_mbytes_per_sec": 0, 00:04:17.261 "w_mbytes_per_sec": 0 00:04:17.261 }, 00:04:17.261 "claimed": true, 00:04:17.261 "claim_type": "exclusive_write", 00:04:17.261 "zoned": false, 00:04:17.261 "supported_io_types": { 00:04:17.261 "read": true, 00:04:17.261 "write": true, 00:04:17.261 "unmap": true, 00:04:17.261 "flush": true, 00:04:17.261 "reset": true, 00:04:17.261 "nvme_admin": false, 00:04:17.261 "nvme_io": false, 00:04:17.261 "nvme_io_md": false, 00:04:17.261 "write_zeroes": true, 00:04:17.261 "zcopy": true, 00:04:17.261 "get_zone_info": false, 00:04:17.261 "zone_management": false, 00:04:17.261 "zone_append": false, 00:04:17.261 "compare": false, 00:04:17.261 "compare_and_write": false, 00:04:17.261 "abort": true, 00:04:17.261 "seek_hole": false, 00:04:17.261 "seek_data": false, 00:04:17.261 "copy": true, 00:04:17.261 "nvme_iov_md": false 00:04:17.261 }, 00:04:17.261 "memory_domains": [ 00:04:17.261 { 00:04:17.261 "dma_device_id": "system", 00:04:17.261 "dma_device_type": 1 00:04:17.261 }, 00:04:17.261 { 00:04:17.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.261 "dma_device_type": 2 00:04:17.261 } 00:04:17.261 ], 00:04:17.261 "driver_specific": {} 00:04:17.261 }, 00:04:17.261 { 00:04:17.261 "name": "Passthru0", 00:04:17.261 "aliases": [ 00:04:17.261 "553327d3-30a9-58d2-b9f5-cbd184a482f8" 00:04:17.261 ], 00:04:17.261 "product_name": "passthru", 00:04:17.261 "block_size": 512, 00:04:17.261 "num_blocks": 16384, 00:04:17.261 "uuid": "553327d3-30a9-58d2-b9f5-cbd184a482f8", 00:04:17.261 "assigned_rate_limits": { 00:04:17.261 "rw_ios_per_sec": 0, 00:04:17.261 "rw_mbytes_per_sec": 0, 00:04:17.261 "r_mbytes_per_sec": 0, 00:04:17.261 "w_mbytes_per_sec": 0 00:04:17.261 }, 00:04:17.261 "claimed": false, 00:04:17.261 "zoned": false, 00:04:17.261 "supported_io_types": { 00:04:17.261 "read": true, 00:04:17.261 "write": true, 00:04:17.261 "unmap": true, 00:04:17.261 "flush": true, 00:04:17.261 "reset": true, 00:04:17.261 "nvme_admin": false, 00:04:17.261 "nvme_io": false, 00:04:17.261 "nvme_io_md": false, 00:04:17.261 "write_zeroes": true, 00:04:17.261 "zcopy": true, 00:04:17.261 "get_zone_info": false, 00:04:17.261 "zone_management": false, 00:04:17.261 "zone_append": false, 00:04:17.261 "compare": false, 00:04:17.261 "compare_and_write": false, 00:04:17.261 "abort": true, 00:04:17.261 "seek_hole": false, 00:04:17.261 "seek_data": false, 00:04:17.261 "copy": true, 00:04:17.261 "nvme_iov_md": false 00:04:17.261 }, 00:04:17.261 "memory_domains": [ 00:04:17.261 { 00:04:17.261 "dma_device_id": "system", 00:04:17.261 "dma_device_type": 1 00:04:17.261 }, 
00:04:17.261 { 00:04:17.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.261 "dma_device_type": 2 00:04:17.261 } 00:04:17.261 ], 00:04:17.261 "driver_specific": { 00:04:17.261 "passthru": { 00:04:17.261 "name": "Passthru0", 00:04:17.261 "base_bdev_name": "Malloc0" 00:04:17.261 } 00:04:17.261 } 00:04:17.261 } 00:04:17.261 ]' 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.261 ************************************ 00:04:17.261 END TEST rpc_integrity 00:04:17.261 10:03:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.261 00:04:17.261 real 0m0.260s 00:04:17.261 user 0m0.127s 00:04:17.261 sys 0m0.039s 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.261 10:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.261 ************************************ 00:04:17.522 10:03:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:17.522 10:03:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.522 10:03:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.522 10:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.522 ************************************ 00:04:17.522 START TEST rpc_plugins 00:04:17.522 ************************************ 00:04:17.522 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:17.522 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:17.522 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.523 10:03:23 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:17.523 { 00:04:17.523 "name": "Malloc1", 00:04:17.523 "aliases": [ 00:04:17.523 "1639c6f1-97b8-43fa-9676-83dc7d151ebf" 00:04:17.523 ], 00:04:17.523 "product_name": "Malloc disk", 00:04:17.523 "block_size": 4096, 00:04:17.523 "num_blocks": 256, 00:04:17.523 "uuid": "1639c6f1-97b8-43fa-9676-83dc7d151ebf", 00:04:17.523 "assigned_rate_limits": { 00:04:17.523 "rw_ios_per_sec": 0, 00:04:17.523 "rw_mbytes_per_sec": 0, 00:04:17.523 "r_mbytes_per_sec": 0, 00:04:17.523 "w_mbytes_per_sec": 0 00:04:17.523 }, 00:04:17.523 "claimed": false, 00:04:17.523 "zoned": false, 00:04:17.523 "supported_io_types": { 00:04:17.523 "read": true, 00:04:17.523 "write": true, 00:04:17.523 "unmap": true, 00:04:17.523 "flush": true, 00:04:17.523 "reset": true, 00:04:17.523 "nvme_admin": false, 00:04:17.523 "nvme_io": false, 00:04:17.523 "nvme_io_md": false, 00:04:17.523 "write_zeroes": true, 00:04:17.523 "zcopy": true, 00:04:17.523 "get_zone_info": false, 00:04:17.523 "zone_management": false, 00:04:17.523 "zone_append": false, 00:04:17.523 "compare": false, 00:04:17.523 "compare_and_write": false, 00:04:17.523 "abort": true, 00:04:17.523 "seek_hole": false, 00:04:17.523 "seek_data": false, 00:04:17.523 "copy": true, 00:04:17.523 "nvme_iov_md": false 00:04:17.523 }, 00:04:17.523 "memory_domains": [ 00:04:17.523 { 00:04:17.523 "dma_device_id": "system", 00:04:17.523 "dma_device_type": 1 00:04:17.523 }, 00:04:17.523 { 00:04:17.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.523 "dma_device_type": 2 00:04:17.523 } 00:04:17.523 ], 00:04:17.523 "driver_specific": {} 00:04:17.523 } 00:04:17.523 ]' 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:17.523 10:03:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:17.523 00:04:17.523 real 0m0.119s 00:04:17.523 user 0m0.057s 00:04:17.523 sys 0m0.019s 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.523 10:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:17.523 ************************************ 00:04:17.523 END TEST rpc_plugins 00:04:17.523 ************************************ 00:04:17.523 10:03:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:17.523 10:03:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.523 10:03:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.523 10:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.523 ************************************ 00:04:17.523 START TEST rpc_trace_cmd_test 
00:04:17.523 ************************************ 00:04:17.523 10:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:17.523 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:17.523 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:17.523 10:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.523 10:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.783 10:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.783 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:17.783 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57258", 00:04:17.783 "tpoint_group_mask": "0x8", 00:04:17.783 "iscsi_conn": { 00:04:17.783 "mask": "0x2", 00:04:17.783 "tpoint_mask": "0x0" 00:04:17.783 }, 00:04:17.783 "scsi": { 00:04:17.783 "mask": "0x4", 00:04:17.783 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "bdev": { 00:04:17.784 "mask": "0x8", 00:04:17.784 "tpoint_mask": "0xffffffffffffffff" 00:04:17.784 }, 00:04:17.784 "nvmf_rdma": { 00:04:17.784 "mask": "0x10", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "nvmf_tcp": { 00:04:17.784 "mask": "0x20", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "ftl": { 00:04:17.784 "mask": "0x40", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "blobfs": { 00:04:17.784 "mask": "0x80", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "dsa": { 00:04:17.784 "mask": "0x200", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "thread": { 00:04:17.784 "mask": "0x400", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "nvme_pcie": { 00:04:17.784 "mask": "0x800", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "iaa": { 00:04:17.784 "mask": "0x1000", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "nvme_tcp": { 00:04:17.784 "mask": "0x2000", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "bdev_nvme": { 00:04:17.784 "mask": "0x4000", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "sock": { 00:04:17.784 "mask": "0x8000", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "blob": { 00:04:17.784 "mask": "0x10000", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "bdev_raid": { 00:04:17.784 "mask": "0x20000", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 }, 00:04:17.784 "scheduler": { 00:04:17.784 "mask": "0x40000", 00:04:17.784 "tpoint_mask": "0x0" 00:04:17.784 } 00:04:17.784 }' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:17.784 00:04:17.784 real 0m0.180s 00:04:17.784 
user 0m0.142s 00:04:17.784 sys 0m0.025s 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.784 10:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.784 ************************************ 00:04:17.784 END TEST rpc_trace_cmd_test 00:04:17.784 ************************************ 00:04:17.784 10:03:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.784 10:03:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.784 10:03:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.784 10:03:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.784 10:03:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.784 10:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.784 ************************************ 00:04:17.784 START TEST rpc_daemon_integrity 00:04:17.784 ************************************ 00:04:17.784 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:17.784 10:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.784 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.784 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.784 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.784 10:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.784 10:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.046 10:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.046 { 00:04:18.046 "name": "Malloc2", 00:04:18.046 "aliases": [ 00:04:18.046 "4be49473-5d3d-4262-8d4f-84788c575a31" 00:04:18.046 ], 00:04:18.046 "product_name": "Malloc disk", 00:04:18.046 "block_size": 512, 00:04:18.046 "num_blocks": 16384, 00:04:18.046 "uuid": "4be49473-5d3d-4262-8d4f-84788c575a31", 00:04:18.046 "assigned_rate_limits": { 00:04:18.046 "rw_ios_per_sec": 0, 00:04:18.046 "rw_mbytes_per_sec": 0, 00:04:18.046 "r_mbytes_per_sec": 0, 00:04:18.046 "w_mbytes_per_sec": 0 00:04:18.046 }, 00:04:18.046 "claimed": false, 00:04:18.046 "zoned": false, 00:04:18.046 "supported_io_types": { 00:04:18.046 "read": true, 00:04:18.046 "write": true, 00:04:18.046 "unmap": true, 00:04:18.046 "flush": true, 00:04:18.046 "reset": true, 00:04:18.046 "nvme_admin": false, 00:04:18.046 "nvme_io": false, 00:04:18.046 "nvme_io_md": false, 00:04:18.046 "write_zeroes": true, 00:04:18.046 "zcopy": true, 00:04:18.046 "get_zone_info": 
false, 00:04:18.046 "zone_management": false, 00:04:18.046 "zone_append": false, 00:04:18.046 "compare": false, 00:04:18.046 "compare_and_write": false, 00:04:18.046 "abort": true, 00:04:18.046 "seek_hole": false, 00:04:18.046 "seek_data": false, 00:04:18.046 "copy": true, 00:04:18.046 "nvme_iov_md": false 00:04:18.046 }, 00:04:18.046 "memory_domains": [ 00:04:18.046 { 00:04:18.046 "dma_device_id": "system", 00:04:18.046 "dma_device_type": 1 00:04:18.046 }, 00:04:18.046 { 00:04:18.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.046 "dma_device_type": 2 00:04:18.046 } 00:04:18.046 ], 00:04:18.046 "driver_specific": {} 00:04:18.046 } 00:04:18.046 ]' 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.046 [2024-12-06 10:03:24.050499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:18.046 [2024-12-06 10:03:24.050573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.046 [2024-12-06 10:03:24.050599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:18.046 [2024-12-06 10:03:24.050612] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.046 [2024-12-06 10:03:24.053268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.046 [2024-12-06 10:03:24.053323] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.046 Passthru0 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.046 { 00:04:18.046 "name": "Malloc2", 00:04:18.046 "aliases": [ 00:04:18.046 "4be49473-5d3d-4262-8d4f-84788c575a31" 00:04:18.046 ], 00:04:18.046 "product_name": "Malloc disk", 00:04:18.046 "block_size": 512, 00:04:18.046 "num_blocks": 16384, 00:04:18.046 "uuid": "4be49473-5d3d-4262-8d4f-84788c575a31", 00:04:18.046 "assigned_rate_limits": { 00:04:18.046 "rw_ios_per_sec": 0, 00:04:18.046 "rw_mbytes_per_sec": 0, 00:04:18.046 "r_mbytes_per_sec": 0, 00:04:18.046 "w_mbytes_per_sec": 0 00:04:18.046 }, 00:04:18.046 "claimed": true, 00:04:18.046 "claim_type": "exclusive_write", 00:04:18.046 "zoned": false, 00:04:18.046 "supported_io_types": { 00:04:18.046 "read": true, 00:04:18.046 "write": true, 00:04:18.046 "unmap": true, 00:04:18.046 "flush": true, 00:04:18.046 "reset": true, 00:04:18.046 "nvme_admin": false, 00:04:18.046 "nvme_io": false, 00:04:18.046 "nvme_io_md": false, 00:04:18.046 "write_zeroes": true, 00:04:18.046 "zcopy": true, 00:04:18.046 "get_zone_info": false, 00:04:18.046 "zone_management": false, 00:04:18.046 "zone_append": false, 00:04:18.046 "compare": false, 
00:04:18.046 "compare_and_write": false, 00:04:18.046 "abort": true, 00:04:18.046 "seek_hole": false, 00:04:18.046 "seek_data": false, 00:04:18.046 "copy": true, 00:04:18.046 "nvme_iov_md": false 00:04:18.046 }, 00:04:18.046 "memory_domains": [ 00:04:18.046 { 00:04:18.046 "dma_device_id": "system", 00:04:18.046 "dma_device_type": 1 00:04:18.046 }, 00:04:18.046 { 00:04:18.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.046 "dma_device_type": 2 00:04:18.046 } 00:04:18.046 ], 00:04:18.046 "driver_specific": {} 00:04:18.046 }, 00:04:18.046 { 00:04:18.046 "name": "Passthru0", 00:04:18.046 "aliases": [ 00:04:18.046 "4c6f6c75-b556-5803-b0a6-9fc9ab42e02a" 00:04:18.046 ], 00:04:18.046 "product_name": "passthru", 00:04:18.046 "block_size": 512, 00:04:18.046 "num_blocks": 16384, 00:04:18.046 "uuid": "4c6f6c75-b556-5803-b0a6-9fc9ab42e02a", 00:04:18.046 "assigned_rate_limits": { 00:04:18.046 "rw_ios_per_sec": 0, 00:04:18.046 "rw_mbytes_per_sec": 0, 00:04:18.046 "r_mbytes_per_sec": 0, 00:04:18.046 "w_mbytes_per_sec": 0 00:04:18.046 }, 00:04:18.046 "claimed": false, 00:04:18.046 "zoned": false, 00:04:18.046 "supported_io_types": { 00:04:18.046 "read": true, 00:04:18.046 "write": true, 00:04:18.046 "unmap": true, 00:04:18.046 "flush": true, 00:04:18.046 "reset": true, 00:04:18.046 "nvme_admin": false, 00:04:18.046 "nvme_io": false, 00:04:18.046 "nvme_io_md": false, 00:04:18.046 "write_zeroes": true, 00:04:18.046 "zcopy": true, 00:04:18.046 "get_zone_info": false, 00:04:18.046 "zone_management": false, 00:04:18.046 "zone_append": false, 00:04:18.046 "compare": false, 00:04:18.046 "compare_and_write": false, 00:04:18.046 "abort": true, 00:04:18.046 "seek_hole": false, 00:04:18.046 "seek_data": false, 00:04:18.046 "copy": true, 00:04:18.046 "nvme_iov_md": false 00:04:18.046 }, 00:04:18.046 "memory_domains": [ 00:04:18.046 { 00:04:18.046 "dma_device_id": "system", 00:04:18.046 "dma_device_type": 1 00:04:18.046 }, 00:04:18.046 { 00:04:18.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.046 "dma_device_type": 2 00:04:18.046 } 00:04:18.046 ], 00:04:18.046 "driver_specific": { 00:04:18.046 "passthru": { 00:04:18.046 "name": "Passthru0", 00:04:18.046 "base_bdev_name": "Malloc2" 00:04:18.046 } 00:04:18.046 } 00:04:18.046 } 00:04:18.046 ]' 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:18.046 10:03:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.046 00:04:18.046 real 0m0.264s 00:04:18.047 user 0m0.132s 00:04:18.047 sys 0m0.040s 00:04:18.047 ************************************ 00:04:18.047 END TEST rpc_daemon_integrity 00:04:18.047 ************************************ 00:04:18.047 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.047 10:03:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:18.308 10:03:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:18.308 10:03:24 rpc -- rpc/rpc.sh@84 -- # killprocess 57258 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@954 -- # '[' -z 57258 ']' 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@958 -- # kill -0 57258 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@959 -- # uname 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57258 00:04:18.308 killing process with pid 57258 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57258' 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@973 -- # kill 57258 00:04:18.308 10:03:24 rpc -- common/autotest_common.sh@978 -- # wait 57258 00:04:20.225 00:04:20.225 real 0m4.121s 00:04:20.225 user 0m4.418s 00:04:20.225 sys 0m0.794s 00:04:20.225 10:03:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.225 10:03:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.225 ************************************ 00:04:20.225 END TEST rpc 00:04:20.225 ************************************ 00:04:20.225 10:03:26 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.225 10:03:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.225 10:03:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.225 10:03:26 -- common/autotest_common.sh@10 -- # set +x 00:04:20.225 ************************************ 00:04:20.225 START TEST skip_rpc 00:04:20.225 ************************************ 00:04:20.225 10:03:26 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:20.225 * Looking for test storage... 
00:04:20.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:20.225 10:03:26 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:20.225 10:03:26 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:20.225 10:03:26 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:20.225 10:03:26 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.225 10:03:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.226 10:03:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.226 --rc genhtml_branch_coverage=1 00:04:20.226 --rc genhtml_function_coverage=1 00:04:20.226 --rc genhtml_legend=1 00:04:20.226 --rc geninfo_all_blocks=1 00:04:20.226 --rc geninfo_unexecuted_blocks=1 00:04:20.226 00:04:20.226 ' 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.226 --rc genhtml_branch_coverage=1 00:04:20.226 --rc genhtml_function_coverage=1 00:04:20.226 --rc genhtml_legend=1 00:04:20.226 --rc geninfo_all_blocks=1 00:04:20.226 --rc geninfo_unexecuted_blocks=1 00:04:20.226 00:04:20.226 ' 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.226 --rc genhtml_branch_coverage=1 00:04:20.226 --rc genhtml_function_coverage=1 00:04:20.226 --rc genhtml_legend=1 00:04:20.226 --rc geninfo_all_blocks=1 00:04:20.226 --rc geninfo_unexecuted_blocks=1 00:04:20.226 00:04:20.226 ' 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:20.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.226 --rc genhtml_branch_coverage=1 00:04:20.226 --rc genhtml_function_coverage=1 00:04:20.226 --rc genhtml_legend=1 00:04:20.226 --rc geninfo_all_blocks=1 00:04:20.226 --rc geninfo_unexecuted_blocks=1 00:04:20.226 00:04:20.226 ' 00:04:20.226 10:03:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.226 10:03:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:20.226 10:03:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.226 10:03:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.226 ************************************ 00:04:20.226 START TEST skip_rpc 00:04:20.226 ************************************ 00:04:20.226 10:03:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:20.226 10:03:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57476 00:04:20.226 10:03:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.226 10:03:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:20.226 10:03:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:20.226 [2024-12-06 10:03:26.297748] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
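The skip_rpc case that starts here reduces to a single assertion: with the RPC server disabled, every RPC call must fail. A minimal stand-alone sketch of that pattern, assuming the spdk_repo layout used in this log and substituting scripts/rpc.py for the harness's rpc_cmd helper:

  #!/usr/bin/env bash
  SPDK=/home/vagrant/spdk_repo/spdk
  # Start the target with its RPC server disabled, as the harness does.
  "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5   # the harness likewise sleeps 5s rather than polling a socket
  # spdk_get_version is the cheapest RPC; it must fail with no listener.
  if "$SPDK/scripts/rpc.py" spdk_get_version; then
      echo "FAIL: RPC answered despite --no-rpc-server" >&2
  fi
  kill "$spdk_pid"; wait "$spdk_pid"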
00:04:20.226 [2024-12-06 10:03:26.297906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57476 ] 00:04:20.487 [2024-12-06 10:03:26.464008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.487 [2024-12-06 10:03:26.606338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57476 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57476 ']' 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57476 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57476 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.783 killing process with pid 57476 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57476' 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57476 00:04:25.783 10:03:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57476 00:04:27.168 00:04:27.168 real 0m6.779s 00:04:27.168 user 0m6.215s 00:04:27.168 sys 0m0.438s 00:04:27.168 10:03:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.168 ************************************ 00:04:27.168 END TEST skip_rpc 00:04:27.168 ************************************ 00:04:27.168 10:03:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:27.168 10:03:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.168 10:03:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.168 10:03:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.168 10:03:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.168 ************************************ 00:04:27.168 START TEST skip_rpc_with_json 00:04:27.168 ************************************ 00:04:27.168 10:03:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57574 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57574 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57574 ']' 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:27.169 10:03:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.169 [2024-12-06 10:03:33.169142] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
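The skip_rpc_with_json run that begins here exercises a save/restore round trip over JSON-RPC: query a transport that does not exist yet (the code -19 error echoed below), create it, save the whole configuration to config.json, then boot a fresh target from that file and check its log for the transport-init banner. Condensed into a sketch against the default /var/tmp/spdk.sock socket; the redirect into log.txt is an assumption standing in for the harness's own output capture:

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"
  "$RPC" nvmf_get_transports --trtype tcp   # expected to fail: no transport yet
  "$RPC" nvmf_create_transport -t tcp       # brings the TCP transport up
  "$RPC" save_config > "$SPDK/test/rpc/config.json"
  # Restart from the saved JSON with the RPC server off: the transport
  # must come back from the file alone, with no live RPC involved.
  "$SPDK/build/bin/spdk_tgt" --no-rpc-server -m 0x1 \
      --json "$SPDK/test/rpc/config.json" > "$SPDK/test/rpc/log.txt" 2>&1 &
  pid=$!; sleep 5; kill "$pid"; wait "$pid"
  grep -q 'TCP Transport Init' "$SPDK/test/rpc/log.txt" && echo "config restored"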
00:04:27.169 [2024-12-06 10:03:33.169345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57574 ] 00:04:27.430 [2024-12-06 10:03:33.339907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.430 [2024-12-06 10:03:33.481265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.371 [2024-12-06 10:03:34.218749] nvmf_rpc.c:2872:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.371 request: 00:04:28.371 { 00:04:28.371 "trtype": "tcp", 00:04:28.371 "method": "nvmf_get_transports", 00:04:28.371 "req_id": 1 00:04:28.371 } 00:04:28.371 Got JSON-RPC error response 00:04:28.371 response: 00:04:28.371 { 00:04:28.371 "code": -19, 00:04:28.371 "message": "No such device" 00:04:28.371 } 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.371 [2024-12-06 10:03:34.230891] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:28.371 10:03:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.371 { 00:04:28.371 "subsystems": [ 00:04:28.371 { 00:04:28.371 "subsystem": "fsdev", 00:04:28.371 "config": [ 00:04:28.371 { 00:04:28.371 "method": "fsdev_set_opts", 00:04:28.371 "params": { 00:04:28.371 "fsdev_io_pool_size": 65535, 00:04:28.371 "fsdev_io_cache_size": 256 00:04:28.371 } 00:04:28.371 } 00:04:28.371 ] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "keyring", 00:04:28.371 "config": [] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "iobuf", 00:04:28.371 "config": [ 00:04:28.371 { 00:04:28.371 "method": "iobuf_set_options", 00:04:28.371 "params": { 00:04:28.371 "small_pool_count": 8192, 00:04:28.371 "large_pool_count": 1024, 00:04:28.371 "small_bufsize": 8192, 00:04:28.371 "large_bufsize": 135168, 00:04:28.371 "enable_numa": false 00:04:28.371 } 00:04:28.371 } 00:04:28.371 ] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "sock", 00:04:28.371 "config": [ 00:04:28.371 { 
00:04:28.371 "method": "sock_set_default_impl", 00:04:28.371 "params": { 00:04:28.371 "impl_name": "posix" 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "sock_impl_set_options", 00:04:28.371 "params": { 00:04:28.371 "impl_name": "ssl", 00:04:28.371 "recv_buf_size": 4096, 00:04:28.371 "send_buf_size": 4096, 00:04:28.371 "enable_recv_pipe": true, 00:04:28.371 "enable_quickack": false, 00:04:28.371 "enable_placement_id": 0, 00:04:28.371 "enable_zerocopy_send_server": true, 00:04:28.371 "enable_zerocopy_send_client": false, 00:04:28.371 "zerocopy_threshold": 0, 00:04:28.371 "tls_version": 0, 00:04:28.371 "enable_ktls": false 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "sock_impl_set_options", 00:04:28.371 "params": { 00:04:28.371 "impl_name": "posix", 00:04:28.371 "recv_buf_size": 2097152, 00:04:28.371 "send_buf_size": 2097152, 00:04:28.371 "enable_recv_pipe": true, 00:04:28.371 "enable_quickack": false, 00:04:28.371 "enable_placement_id": 0, 00:04:28.371 "enable_zerocopy_send_server": true, 00:04:28.371 "enable_zerocopy_send_client": false, 00:04:28.371 "zerocopy_threshold": 0, 00:04:28.371 "tls_version": 0, 00:04:28.371 "enable_ktls": false 00:04:28.371 } 00:04:28.371 } 00:04:28.371 ] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "vmd", 00:04:28.371 "config": [] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "accel", 00:04:28.371 "config": [ 00:04:28.371 { 00:04:28.371 "method": "accel_set_options", 00:04:28.371 "params": { 00:04:28.371 "small_cache_size": 128, 00:04:28.371 "large_cache_size": 16, 00:04:28.371 "task_count": 2048, 00:04:28.371 "sequence_count": 2048, 00:04:28.371 "buf_count": 2048 00:04:28.371 } 00:04:28.371 } 00:04:28.371 ] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "bdev", 00:04:28.371 "config": [ 00:04:28.371 { 00:04:28.371 "method": "bdev_set_options", 00:04:28.371 "params": { 00:04:28.371 "bdev_io_pool_size": 65535, 00:04:28.371 "bdev_io_cache_size": 256, 00:04:28.371 "bdev_auto_examine": true, 00:04:28.371 "iobuf_small_cache_size": 128, 00:04:28.371 "iobuf_large_cache_size": 16 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "bdev_raid_set_options", 00:04:28.371 "params": { 00:04:28.371 "process_window_size_kb": 1024, 00:04:28.371 "process_max_bandwidth_mb_sec": 0 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "bdev_iscsi_set_options", 00:04:28.371 "params": { 00:04:28.371 "timeout_sec": 30 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "bdev_nvme_set_options", 00:04:28.371 "params": { 00:04:28.371 "action_on_timeout": "none", 00:04:28.371 "timeout_us": 0, 00:04:28.371 "timeout_admin_us": 0, 00:04:28.371 "keep_alive_timeout_ms": 10000, 00:04:28.371 "arbitration_burst": 0, 00:04:28.371 "low_priority_weight": 0, 00:04:28.371 "medium_priority_weight": 0, 00:04:28.371 "high_priority_weight": 0, 00:04:28.371 "nvme_adminq_poll_period_us": 10000, 00:04:28.371 "nvme_ioq_poll_period_us": 0, 00:04:28.371 "io_queue_requests": 0, 00:04:28.371 "delay_cmd_submit": true, 00:04:28.371 "transport_retry_count": 4, 00:04:28.371 "bdev_retry_count": 3, 00:04:28.371 "transport_ack_timeout": 0, 00:04:28.371 "ctrlr_loss_timeout_sec": 0, 00:04:28.371 "reconnect_delay_sec": 0, 00:04:28.371 "fast_io_fail_timeout_sec": 0, 00:04:28.371 "disable_auto_failback": false, 00:04:28.371 "generate_uuids": false, 00:04:28.371 "transport_tos": 0, 00:04:28.371 "nvme_error_stat": false, 00:04:28.371 "rdma_srq_size": 0, 00:04:28.371 "io_path_stat": false, 
00:04:28.371 "allow_accel_sequence": false, 00:04:28.371 "rdma_max_cq_size": 0, 00:04:28.371 "rdma_cm_event_timeout_ms": 0, 00:04:28.371 "dhchap_digests": [ 00:04:28.371 "sha256", 00:04:28.371 "sha384", 00:04:28.371 "sha512" 00:04:28.371 ], 00:04:28.371 "dhchap_dhgroups": [ 00:04:28.371 "null", 00:04:28.371 "ffdhe2048", 00:04:28.371 "ffdhe3072", 00:04:28.371 "ffdhe4096", 00:04:28.371 "ffdhe6144", 00:04:28.371 "ffdhe8192" 00:04:28.371 ] 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "bdev_nvme_set_hotplug", 00:04:28.371 "params": { 00:04:28.371 "period_us": 100000, 00:04:28.371 "enable": false 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "bdev_wait_for_examine" 00:04:28.371 } 00:04:28.371 ] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "scsi", 00:04:28.371 "config": null 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "scheduler", 00:04:28.371 "config": [ 00:04:28.371 { 00:04:28.371 "method": "framework_set_scheduler", 00:04:28.371 "params": { 00:04:28.371 "name": "static" 00:04:28.371 } 00:04:28.371 } 00:04:28.371 ] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "vhost_scsi", 00:04:28.371 "config": [] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "vhost_blk", 00:04:28.371 "config": [] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "ublk", 00:04:28.371 "config": [] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "nbd", 00:04:28.371 "config": [] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "nvmf", 00:04:28.371 "config": [ 00:04:28.371 { 00:04:28.371 "method": "nvmf_set_config", 00:04:28.371 "params": { 00:04:28.371 "discovery_filter": "match_any", 00:04:28.371 "admin_cmd_passthru": { 00:04:28.371 "identify_ctrlr": false 00:04:28.371 }, 00:04:28.371 "dhchap_digests": [ 00:04:28.371 "sha256", 00:04:28.371 "sha384", 00:04:28.371 "sha512" 00:04:28.371 ], 00:04:28.371 "dhchap_dhgroups": [ 00:04:28.371 "null", 00:04:28.371 "ffdhe2048", 00:04:28.371 "ffdhe3072", 00:04:28.371 "ffdhe4096", 00:04:28.371 "ffdhe6144", 00:04:28.371 "ffdhe8192" 00:04:28.371 ] 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "nvmf_set_max_subsystems", 00:04:28.371 "params": { 00:04:28.371 "max_subsystems": 1024 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "nvmf_set_crdt", 00:04:28.371 "params": { 00:04:28.371 "crdt1": 0, 00:04:28.371 "crdt2": 0, 00:04:28.371 "crdt3": 0 00:04:28.371 } 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "method": "nvmf_create_transport", 00:04:28.371 "params": { 00:04:28.371 "trtype": "TCP", 00:04:28.371 "max_queue_depth": 128, 00:04:28.371 "max_io_qpairs_per_ctrlr": 127, 00:04:28.371 "in_capsule_data_size": 4096, 00:04:28.371 "max_io_size": 131072, 00:04:28.371 "io_unit_size": 131072, 00:04:28.371 "max_aq_depth": 128, 00:04:28.371 "num_shared_buffers": 511, 00:04:28.371 "buf_cache_size": 4294967295, 00:04:28.371 "dif_insert_or_strip": false, 00:04:28.371 "zcopy": false, 00:04:28.371 "c2h_success": true, 00:04:28.371 "sock_priority": 0, 00:04:28.371 "abort_timeout_sec": 1, 00:04:28.371 "ack_timeout": 0, 00:04:28.371 "data_wr_pool_size": 0 00:04:28.371 } 00:04:28.371 } 00:04:28.371 ] 00:04:28.371 }, 00:04:28.371 { 00:04:28.371 "subsystem": "iscsi", 00:04:28.371 "config": [ 00:04:28.371 { 00:04:28.371 "method": "iscsi_set_options", 00:04:28.371 "params": { 00:04:28.371 "node_base": "iqn.2016-06.io.spdk", 00:04:28.371 "max_sessions": 128, 00:04:28.371 "max_connections_per_session": 2, 00:04:28.371 "max_queue_depth": 64, 00:04:28.371 
"default_time2wait": 2, 00:04:28.371 "default_time2retain": 20, 00:04:28.371 "first_burst_length": 8192, 00:04:28.371 "immediate_data": true, 00:04:28.371 "allow_duplicated_isid": false, 00:04:28.371 "error_recovery_level": 0, 00:04:28.371 "nop_timeout": 60, 00:04:28.371 "nop_in_interval": 30, 00:04:28.371 "disable_chap": false, 00:04:28.371 "require_chap": false, 00:04:28.371 "mutual_chap": false, 00:04:28.371 "chap_group": 0, 00:04:28.371 "max_large_datain_per_connection": 64, 00:04:28.371 "max_r2t_per_connection": 4, 00:04:28.371 "pdu_pool_size": 36864, 00:04:28.371 "immediate_data_pool_size": 16384, 00:04:28.371 "data_out_pool_size": 2048 00:04:28.372 } 00:04:28.372 } 00:04:28.372 ] 00:04:28.372 } 00:04:28.372 ] 00:04:28.372 } 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57574 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57574 ']' 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57574 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57574 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:28.372 killing process with pid 57574 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57574' 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57574 00:04:28.372 10:03:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57574 00:04:30.281 10:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57619 00:04:30.281 10:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:30.281 10:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57619 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57619 ']' 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57619 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57619 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.668 killing process with pid 57619 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57619' 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57619 00:04:35.668 10:03:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57619 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.054 00:04:37.054 real 0m9.844s 00:04:37.054 user 0m9.149s 00:04:37.054 sys 0m0.925s 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.054 ************************************ 00:04:37.054 END TEST skip_rpc_with_json 00:04:37.054 ************************************ 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.054 10:03:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:37.054 10:03:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.054 10:03:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.054 10:03:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.054 ************************************ 00:04:37.054 START TEST skip_rpc_with_delay 00:04:37.054 ************************************ 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.054 10:03:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:37.054 [2024-12-06 10:03:43.078127] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
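The error just printed is the whole of skip_rpc_with_delay: --wait-for-rpc is meaningless when --no-rpc-server suppresses the server, and the target must refuse to start rather than wait forever. The check is a one-line negation, sketched with the exact flags from this run:

  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "FAIL: target started despite contradictory flags" >&2
  fi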
00:04:37.054 10:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:37.054 10:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.054 10:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:37.054 10:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.055 00:04:37.055 real 0m0.148s 00:04:37.055 user 0m0.070s 00:04:37.055 sys 0m0.076s 00:04:37.055 10:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.055 ************************************ 00:04:37.055 END TEST skip_rpc_with_delay 00:04:37.055 ************************************ 00:04:37.055 10:03:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:37.055 10:03:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:37.055 10:03:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:37.055 10:03:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:37.055 10:03:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.055 10:03:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.055 10:03:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.055 ************************************ 00:04:37.055 START TEST exit_on_failed_rpc_init 00:04:37.055 ************************************ 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:37.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57747 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57747 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57747 ']' 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.055 10:03:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.334 [2024-12-06 10:03:43.288868] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:37.334 [2024-12-06 10:03:43.289036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57747 ] 00:04:37.334 [2024-12-06 10:03:43.451375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.638 [2024-12-06 10:03:43.590984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.208 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:38.470 [2024-12-06 10:03:44.446591] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:38.470 [2024-12-06 10:03:44.446751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57765 ] 00:04:38.470 [2024-12-06 10:03:44.610993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.732 [2024-12-06 10:03:44.754762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.732 [2024-12-06 10:03:44.754878] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
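The two errors above are what exit_on_failed_rpc_init exists to provoke: a second target asked to listen on the default /var/tmp/spdk.sock must exit cleanly instead of hanging. A sketch of the collision, with an added third step (not part of the test itself) showing the usual escape hatch, the -r flag that selects a different RPC socket path:

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &                    # first target owns /var/tmp/spdk.sock
  first=$!
  sleep 1                                 # crude wait; the harness polls the socket instead
  "$SPDK_TGT" -m 0x2 || echo "second target exited: socket in use"
  "$SPDK_TGT" -m 0x2 -r /var/tmp/spdk2.sock &   # distinct socket: both can run
  second=$!
  kill "$second" "$first"; wait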
00:04:38.732 [2024-12-06 10:03:44.754898] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:38.732 [2024-12-06 10:03:44.754910] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57747 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57747 ']' 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57747 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57747 00:04:38.995 killing process with pid 57747 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57747' 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57747 00:04:38.995 10:03:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57747 00:04:40.897 00:04:40.897 real 0m3.430s 00:04:40.897 user 0m3.690s 00:04:40.897 sys 0m0.606s 00:04:40.897 10:03:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.897 ************************************ 00:04:40.897 END TEST exit_on_failed_rpc_init 00:04:40.897 ************************************ 00:04:40.897 10:03:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.897 10:03:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.897 00:04:40.897 real 0m20.641s 00:04:40.897 user 0m19.289s 00:04:40.897 sys 0m2.229s 00:04:40.897 10:03:46 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.897 ************************************ 00:04:40.897 END TEST skip_rpc 00:04:40.897 ************************************ 00:04:40.897 10:03:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.897 10:03:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.897 10:03:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.897 10:03:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.897 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.897 
************************************ 00:04:40.897 START TEST rpc_client 00:04:40.897 ************************************ 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.897 * Looking for test storage... 00:04:40.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.897 10:03:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.897 --rc genhtml_legend=1 00:04:40.897 --rc geninfo_all_blocks=1 00:04:40.897 --rc geninfo_unexecuted_blocks=1 00:04:40.897 00:04:40.897 ' 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.897 --rc genhtml_legend=1 00:04:40.897 --rc geninfo_all_blocks=1 00:04:40.897 --rc geninfo_unexecuted_blocks=1 00:04:40.897 00:04:40.897 ' 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.897 --rc genhtml_legend=1 00:04:40.897 --rc geninfo_all_blocks=1 00:04:40.897 --rc geninfo_unexecuted_blocks=1 00:04:40.897 00:04:40.897 ' 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.897 --rc genhtml_legend=1 00:04:40.897 --rc geninfo_all_blocks=1 00:04:40.897 --rc geninfo_unexecuted_blocks=1 00:04:40.897 00:04:40.897 ' 00:04:40.897 10:03:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.897 OK 00:04:40.897 10:03:46 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.897 00:04:40.897 real 0m0.198s 00:04:40.897 user 0m0.109s 00:04:40.897 sys 0m0.091s 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.897 ************************************ 00:04:40.897 END TEST rpc_client 00:04:40.897 ************************************ 00:04:40.897 10:03:46 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.897 10:03:46 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.897 10:03:46 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.897 10:03:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.897 10:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.897 ************************************ 00:04:40.897 START TEST json_config 00:04:40.897 ************************************ 00:04:40.897 10:03:46 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.897 10:03:47 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.897 10:03:47 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.897 10:03:47 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.158 10:03:47 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.158 10:03:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.158 10:03:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.158 10:03:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.158 10:03:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.158 10:03:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.158 10:03:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.158 10:03:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.158 10:03:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:41.158 10:03:47 json_config -- scripts/common.sh@345 -- # : 1 00:04:41.158 10:03:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.158 10:03:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.158 10:03:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:41.158 10:03:47 json_config -- scripts/common.sh@353 -- # local d=1 00:04:41.158 10:03:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.158 10:03:47 json_config -- scripts/common.sh@355 -- # echo 1 00:04:41.158 10:03:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.158 10:03:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@353 -- # local d=2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.158 10:03:47 json_config -- scripts/common.sh@355 -- # echo 2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.158 10:03:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.158 10:03:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.158 10:03:47 json_config -- scripts/common.sh@368 -- # return 0 00:04:41.158 10:03:47 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.158 10:03:47 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.158 --rc genhtml_branch_coverage=1 00:04:41.158 --rc genhtml_function_coverage=1 00:04:41.158 --rc genhtml_legend=1 00:04:41.158 --rc geninfo_all_blocks=1 00:04:41.158 --rc geninfo_unexecuted_blocks=1 00:04:41.158 00:04:41.158 ' 00:04:41.158 10:03:47 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.158 --rc genhtml_branch_coverage=1 00:04:41.158 --rc genhtml_function_coverage=1 00:04:41.158 --rc genhtml_legend=1 00:04:41.158 --rc geninfo_all_blocks=1 00:04:41.158 --rc geninfo_unexecuted_blocks=1 00:04:41.158 00:04:41.158 ' 00:04:41.158 10:03:47 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.158 --rc genhtml_branch_coverage=1 00:04:41.158 --rc genhtml_function_coverage=1 00:04:41.158 --rc genhtml_legend=1 00:04:41.158 --rc geninfo_all_blocks=1 00:04:41.158 --rc geninfo_unexecuted_blocks=1 00:04:41.158 00:04:41.158 ' 00:04:41.158 10:03:47 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.158 --rc genhtml_branch_coverage=1 00:04:41.158 --rc genhtml_function_coverage=1 00:04:41.158 --rc genhtml_legend=1 00:04:41.158 --rc geninfo_all_blocks=1 00:04:41.158 --rc geninfo_unexecuted_blocks=1 00:04:41.158 00:04:41.158 ' 00:04:41.158 10:03:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.158 10:03:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c76d917-bf3c-4293-bcbd-5d30cdf374fa 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=4c76d917-bf3c-4293-bcbd-5d30cdf374fa 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.158 10:03:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.159 10:03:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.159 10:03:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.159 10:03:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.159 10:03:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.159 10:03:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.159 10:03:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.159 10:03:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.159 10:03:47 json_config -- paths/export.sh@5 -- # export PATH 00:04:41.159 10:03:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@51 -- # : 0 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.159 10:03:47 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.159 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.159 10:03:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.159 10:03:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.159 10:03:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.159 10:03:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.159 10:03:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.159 10:03:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.159 WARNING: No tests are enabled so not running JSON configuration tests 00:04:41.159 10:03:47 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:41.159 10:03:47 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:41.159 00:04:41.159 real 0m0.153s 00:04:41.159 user 0m0.095s 00:04:41.159 sys 0m0.055s 00:04:41.159 10:03:47 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.159 10:03:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.159 ************************************ 00:04:41.159 END TEST json_config 00:04:41.159 ************************************ 00:04:41.159 10:03:47 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.159 10:03:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.159 10:03:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.159 10:03:47 -- common/autotest_common.sh@10 -- # set +x 00:04:41.159 ************************************ 00:04:41.159 START TEST json_config_extra_key 00:04:41.159 ************************************ 00:04:41.159 10:03:47 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:41.159 10:03:47 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:41.159 10:03:47 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:41.159 10:03:47 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:41.159 10:03:47 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.159 10:03:47 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:41.159 10:03:47 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:41.419 10:03:47 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.419 10:03:47 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:41.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.419 --rc genhtml_branch_coverage=1 00:04:41.419 --rc genhtml_function_coverage=1 00:04:41.419 --rc genhtml_legend=1 00:04:41.419 --rc geninfo_all_blocks=1 00:04:41.419 --rc geninfo_unexecuted_blocks=1 00:04:41.419 00:04:41.419 ' 00:04:41.419 10:03:47 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:41.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.419 --rc genhtml_branch_coverage=1 00:04:41.419 --rc genhtml_function_coverage=1 00:04:41.419 --rc genhtml_legend=1 00:04:41.419 --rc geninfo_all_blocks=1 00:04:41.419 --rc geninfo_unexecuted_blocks=1 00:04:41.419 00:04:41.419 ' 00:04:41.419 10:03:47 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:41.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.419 --rc genhtml_branch_coverage=1 00:04:41.419 --rc genhtml_function_coverage=1 00:04:41.419 --rc genhtml_legend=1 00:04:41.419 --rc geninfo_all_blocks=1 00:04:41.419 --rc geninfo_unexecuted_blocks=1 00:04:41.419 00:04:41.419 ' 00:04:41.419 10:03:47 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:41.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.419 --rc genhtml_branch_coverage=1 00:04:41.419 --rc 
genhtml_function_coverage=1 00:04:41.419 --rc genhtml_legend=1 00:04:41.419 --rc geninfo_all_blocks=1 00:04:41.419 --rc geninfo_unexecuted_blocks=1 00:04:41.419 00:04:41.419 ' 00:04:41.419 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:4c76d917-bf3c-4293-bcbd-5d30cdf374fa 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=4c76d917-bf3c-4293-bcbd-5d30cdf374fa 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.419 10:03:47 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.419 10:03:47 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.419 10:03:47 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.419 10:03:47 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.419 10:03:47 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.419 10:03:47 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.419 10:03:47 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.420 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.420 10:03:47 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.420 10:03:47 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.420 10:03:47 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:41.420 INFO: launching applications... 
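The "[: : integer expression expected" complaint above (and earlier in the json_config run) is bash rejecting a numeric test on an empty string: build_nvmf_app_args reaches '[' '' -eq 1 ']' with an unset variable, the test simply fails, and the script carries on. A minimal reproduction and a guarded form, with a hypothetical variable name:

    flag=''
    [ "$flag" -eq 1 ]          # -> bash: [: : integer expression expected (exit 2)
    [ "${flag:-0}" -eq 1 ]     # defaulting the empty value to 0 keeps the test numeric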
00:04:41.420 10:03:47 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57964 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.420 Waiting for target to run... 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57964 /var/tmp/spdk_tgt.sock 00:04:41.420 10:03:47 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57964 ']' 00:04:41.420 10:03:47 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.420 10:03:47 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.420 10:03:47 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.420 10:03:47 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.420 10:03:47 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.420 10:03:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.420 [2024-12-06 10:03:47.429458] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:41.420 [2024-12-06 10:03:47.429900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57964 ] 00:04:41.678 [2024-12-06 10:03:47.753696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.951 [2024-12-06 10:03:47.848003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.261 00:04:42.261 INFO: shutting down applications... 00:04:42.261 10:03:48 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.261 10:03:48 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.261 10:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
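Condensed from the json_config_test_start_app trace just above (paths and arguments as shown in this log; waitforlisten is the autotest_common.sh helper being traced):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    app_pid=$!                                        # 57964 in this run
    waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock   # poll until the RPC socket answers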
00:04:42.261 10:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57964 ]] 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57964 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57964 00:04:42.261 10:03:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.828 10:03:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.828 10:03:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.828 10:03:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57964 00:04:42.828 10:03:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.393 10:03:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.393 10:03:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.393 10:03:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57964 00:04:43.393 10:03:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.956 10:03:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.956 10:03:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.956 10:03:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57964 00:04:43.956 10:03:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.212 10:03:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.212 10:03:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.212 10:03:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57964 00:04:44.212 10:03:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:44.212 10:03:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:44.212 SPDK target shutdown done 00:04:44.212 10:03:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:44.212 10:03:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:44.212 Success 00:04:44.212 10:03:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:44.212 ************************************ 00:04:44.212 END TEST json_config_extra_key 00:04:44.212 ************************************ 00:04:44.212 00:04:44.212 real 0m3.162s 00:04:44.212 user 0m2.792s 00:04:44.212 sys 0m0.422s 00:04:44.212 10:03:50 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.212 10:03:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.468 10:03:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.468 10:03:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.468 10:03:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.468 10:03:50 -- common/autotest_common.sh@10 -- # set +x 00:04:44.468 
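The shutdown sequence traced above boils down to a SIGINT plus a bounded liveness poll; a sketch of the same loop from json_config/common.sh:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do               # up to ~15 s in 0.5 s steps
        kill -0 "$app_pid" 2>/dev/null || break    # kill -0 probes without signalling
        sleep 0.5
    done
    echo 'SPDK target shutdown done'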
************************************ 00:04:44.468 START TEST alias_rpc 00:04:44.468 ************************************ 00:04:44.468 10:03:50 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:44.468 * Looking for test storage... 00:04:44.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:44.468 10:03:50 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:44.468 10:03:50 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:44.468 10:03:50 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:44.468 10:03:50 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:44.468 10:03:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.468 10:03:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.468 10:03:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.469 10:03:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:44.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.469 --rc genhtml_branch_coverage=1 00:04:44.469 --rc genhtml_function_coverage=1 00:04:44.469 --rc genhtml_legend=1 00:04:44.469 --rc geninfo_all_blocks=1 00:04:44.469 --rc geninfo_unexecuted_blocks=1 00:04:44.469 00:04:44.469 ' 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:44.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.469 --rc genhtml_branch_coverage=1 00:04:44.469 --rc genhtml_function_coverage=1 00:04:44.469 --rc genhtml_legend=1 00:04:44.469 --rc geninfo_all_blocks=1 00:04:44.469 --rc geninfo_unexecuted_blocks=1 00:04:44.469 00:04:44.469 ' 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:44.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.469 --rc genhtml_branch_coverage=1 00:04:44.469 --rc genhtml_function_coverage=1 00:04:44.469 --rc genhtml_legend=1 00:04:44.469 --rc geninfo_all_blocks=1 00:04:44.469 --rc geninfo_unexecuted_blocks=1 00:04:44.469 00:04:44.469 ' 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:44.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.469 --rc genhtml_branch_coverage=1 00:04:44.469 --rc genhtml_function_coverage=1 00:04:44.469 --rc genhtml_legend=1 00:04:44.469 --rc geninfo_all_blocks=1 00:04:44.469 --rc geninfo_unexecuted_blocks=1 00:04:44.469 00:04:44.469 ' 00:04:44.469 10:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:44.469 10:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58063 00:04:44.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
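The scripts/common.sh trace repeated above (lt 1.15 2 for the installed lcov) is a field-wise version comparison on '.', '-' and ':' separators; a self-contained sketch of the same idea, not the script verbatim:

    lt() {                                   # succeeds when $1 is older than $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                             # equal is not "less than"
    }
    lt 1.15 2 && echo older                  # prints "older", matching the branch taken above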
00:04:44.469 10:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58063 00:04:44.469 10:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58063 ']' 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.469 10:03:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.725 [2024-12-06 10:03:50.657905] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:44.725 [2024-12-06 10:03:50.658035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58063 ] 00:04:44.725 [2024-12-06 10:03:50.825030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.983 [2024-12-06 10:03:50.930481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.606 10:03:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.606 10:03:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:45.606 10:03:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:45.606 10:03:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58063 00:04:45.606 10:03:51 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58063 ']' 00:04:45.606 10:03:51 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58063 00:04:45.606 10:03:51 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.863 10:03:51 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.863 10:03:51 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58063 00:04:45.863 killing process with pid 58063 00:04:45.863 10:03:51 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.863 10:03:51 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.863 10:03:51 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58063' 00:04:45.863 10:03:51 alias_rpc -- common/autotest_common.sh@973 -- # kill 58063 00:04:45.863 10:03:51 alias_rpc -- common/autotest_common.sh@978 -- # wait 58063 00:04:47.232 00:04:47.232 real 0m2.895s 00:04:47.232 user 0m3.008s 00:04:47.232 sys 0m0.398s 00:04:47.232 10:03:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.232 10:03:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.232 ************************************ 00:04:47.232 END TEST alias_rpc 00:04:47.232 ************************************ 00:04:47.232 10:03:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:47.232 10:03:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:47.232 10:03:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.232 10:03:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.232 10:03:53 -- common/autotest_common.sh@10 -- # set +x 
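killprocess, traced above for pid 58063, layers a few sanity checks in front of the signal; a minimal sketch following the checks visible in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")       # reactor_0 here
        echo "killing process with pid $pid"
        kill "$pid"
        [ "$name" = sudo ] || wait "$pid"             # reap directly unless wrapped in sudo
    }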
00:04:47.232 ************************************ 00:04:47.232 START TEST spdkcli_tcp 00:04:47.232 ************************************ 00:04:47.232 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:47.491 * Looking for test storage... 00:04:47.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.491 10:03:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.491 --rc genhtml_branch_coverage=1 00:04:47.491 --rc genhtml_function_coverage=1 00:04:47.491 --rc genhtml_legend=1 00:04:47.491 --rc geninfo_all_blocks=1 00:04:47.491 --rc geninfo_unexecuted_blocks=1 00:04:47.491 00:04:47.491 ' 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.491 --rc genhtml_branch_coverage=1 00:04:47.491 --rc genhtml_function_coverage=1 00:04:47.491 --rc genhtml_legend=1 00:04:47.491 --rc geninfo_all_blocks=1 00:04:47.491 --rc geninfo_unexecuted_blocks=1 00:04:47.491 00:04:47.491 ' 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.491 --rc genhtml_branch_coverage=1 00:04:47.491 --rc genhtml_function_coverage=1 00:04:47.491 --rc genhtml_legend=1 00:04:47.491 --rc geninfo_all_blocks=1 00:04:47.491 --rc geninfo_unexecuted_blocks=1 00:04:47.491 00:04:47.491 ' 00:04:47.491 10:03:53 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:47.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.491 --rc genhtml_branch_coverage=1 00:04:47.491 --rc genhtml_function_coverage=1 00:04:47.491 --rc genhtml_legend=1 00:04:47.491 --rc geninfo_all_blocks=1 00:04:47.491 --rc geninfo_unexecuted_blocks=1 00:04:47.491 00:04:47.491 ' 00:04:47.491 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:47.491 10:03:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:47.491 10:03:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:47.492 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:47.492 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:47.492 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:47.492 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:47.492 10:03:53 spdkcli_tcp -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:47.492 10:03:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.492 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58153 00:04:47.492 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58153 00:04:47.492 10:03:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58153 ']' 00:04:47.492 10:03:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.492 10:03:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.492 10:03:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:47.492 10:03:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.492 10:03:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.492 10:03:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.492 [2024-12-06 10:03:53.626637] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:47.492 [2024-12-06 10:03:53.626871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58153 ] 00:04:47.750 [2024-12-06 10:03:53.793468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.750 [2024-12-06 10:03:53.897687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.750 [2024-12-06 10:03:53.897765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.685 10:03:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.685 10:03:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:48.685 10:03:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.685 10:03:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58170 00:04:48.685 10:03:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.685 [ 00:04:48.685 "bdev_malloc_delete", 00:04:48.685 "bdev_malloc_create", 00:04:48.685 "bdev_null_resize", 00:04:48.685 "bdev_null_delete", 00:04:48.685 "bdev_null_create", 00:04:48.685 "bdev_nvme_cuse_unregister", 00:04:48.685 "bdev_nvme_cuse_register", 00:04:48.685 "bdev_opal_new_user", 00:04:48.685 "bdev_opal_set_lock_state", 00:04:48.685 "bdev_opal_delete", 00:04:48.685 "bdev_opal_get_info", 00:04:48.685 "bdev_opal_create", 00:04:48.685 "bdev_nvme_opal_revert", 00:04:48.685 "bdev_nvme_opal_init", 00:04:48.685 "bdev_nvme_send_cmd", 00:04:48.685 "bdev_nvme_set_keys", 00:04:48.685 "bdev_nvme_get_path_iostat", 00:04:48.685 "bdev_nvme_get_mdns_discovery_info", 00:04:48.685 "bdev_nvme_stop_mdns_discovery", 00:04:48.685 "bdev_nvme_start_mdns_discovery", 00:04:48.685 "bdev_nvme_set_multipath_policy", 00:04:48.685 "bdev_nvme_set_preferred_path", 00:04:48.685 "bdev_nvme_get_io_paths", 00:04:48.685 "bdev_nvme_remove_error_injection", 00:04:48.685 "bdev_nvme_add_error_injection", 00:04:48.685 "bdev_nvme_get_discovery_info", 00:04:48.685 "bdev_nvme_stop_discovery", 00:04:48.685 "bdev_nvme_start_discovery", 00:04:48.685 
"bdev_nvme_get_controller_health_info", 00:04:48.685 "bdev_nvme_disable_controller", 00:04:48.685 "bdev_nvme_enable_controller", 00:04:48.685 "bdev_nvme_reset_controller", 00:04:48.685 "bdev_nvme_get_transport_statistics", 00:04:48.685 "bdev_nvme_apply_firmware", 00:04:48.685 "bdev_nvme_detach_controller", 00:04:48.685 "bdev_nvme_get_controllers", 00:04:48.685 "bdev_nvme_attach_controller", 00:04:48.685 "bdev_nvme_set_hotplug", 00:04:48.685 "bdev_nvme_set_options", 00:04:48.685 "bdev_passthru_delete", 00:04:48.685 "bdev_passthru_create", 00:04:48.685 "bdev_lvol_set_parent_bdev", 00:04:48.685 "bdev_lvol_set_parent", 00:04:48.685 "bdev_lvol_check_shallow_copy", 00:04:48.685 "bdev_lvol_start_shallow_copy", 00:04:48.685 "bdev_lvol_grow_lvstore", 00:04:48.685 "bdev_lvol_get_lvols", 00:04:48.685 "bdev_lvol_get_lvstores", 00:04:48.685 "bdev_lvol_delete", 00:04:48.685 "bdev_lvol_set_read_only", 00:04:48.685 "bdev_lvol_resize", 00:04:48.685 "bdev_lvol_decouple_parent", 00:04:48.685 "bdev_lvol_inflate", 00:04:48.686 "bdev_lvol_rename", 00:04:48.686 "bdev_lvol_clone_bdev", 00:04:48.686 "bdev_lvol_clone", 00:04:48.686 "bdev_lvol_snapshot", 00:04:48.686 "bdev_lvol_create", 00:04:48.686 "bdev_lvol_delete_lvstore", 00:04:48.686 "bdev_lvol_rename_lvstore", 00:04:48.686 "bdev_lvol_create_lvstore", 00:04:48.686 "bdev_raid_set_options", 00:04:48.686 "bdev_raid_remove_base_bdev", 00:04:48.686 "bdev_raid_add_base_bdev", 00:04:48.686 "bdev_raid_delete", 00:04:48.686 "bdev_raid_create", 00:04:48.686 "bdev_raid_get_bdevs", 00:04:48.686 "bdev_error_inject_error", 00:04:48.686 "bdev_error_delete", 00:04:48.686 "bdev_error_create", 00:04:48.686 "bdev_split_delete", 00:04:48.686 "bdev_split_create", 00:04:48.686 "bdev_delay_delete", 00:04:48.686 "bdev_delay_create", 00:04:48.686 "bdev_delay_update_latency", 00:04:48.686 "bdev_zone_block_delete", 00:04:48.686 "bdev_zone_block_create", 00:04:48.686 "blobfs_create", 00:04:48.686 "blobfs_detect", 00:04:48.686 "blobfs_set_cache_size", 00:04:48.686 "bdev_xnvme_delete", 00:04:48.686 "bdev_xnvme_create", 00:04:48.686 "bdev_aio_delete", 00:04:48.686 "bdev_aio_rescan", 00:04:48.686 "bdev_aio_create", 00:04:48.686 "bdev_ftl_set_property", 00:04:48.686 "bdev_ftl_get_properties", 00:04:48.686 "bdev_ftl_get_stats", 00:04:48.686 "bdev_ftl_unmap", 00:04:48.686 "bdev_ftl_unload", 00:04:48.686 "bdev_ftl_delete", 00:04:48.686 "bdev_ftl_load", 00:04:48.686 "bdev_ftl_create", 00:04:48.686 "bdev_virtio_attach_controller", 00:04:48.686 "bdev_virtio_scsi_get_devices", 00:04:48.686 "bdev_virtio_detach_controller", 00:04:48.686 "bdev_virtio_blk_set_hotplug", 00:04:48.686 "bdev_iscsi_delete", 00:04:48.686 "bdev_iscsi_create", 00:04:48.686 "bdev_iscsi_set_options", 00:04:48.686 "accel_error_inject_error", 00:04:48.686 "ioat_scan_accel_module", 00:04:48.686 "dsa_scan_accel_module", 00:04:48.686 "iaa_scan_accel_module", 00:04:48.686 "keyring_file_remove_key", 00:04:48.686 "keyring_file_add_key", 00:04:48.686 "keyring_linux_set_options", 00:04:48.686 "fsdev_aio_delete", 00:04:48.686 "fsdev_aio_create", 00:04:48.686 "iscsi_get_histogram", 00:04:48.686 "iscsi_enable_histogram", 00:04:48.686 "iscsi_set_options", 00:04:48.686 "iscsi_get_auth_groups", 00:04:48.686 "iscsi_auth_group_remove_secret", 00:04:48.686 "iscsi_auth_group_add_secret", 00:04:48.686 "iscsi_delete_auth_group", 00:04:48.686 "iscsi_create_auth_group", 00:04:48.686 "iscsi_set_discovery_auth", 00:04:48.686 "iscsi_get_options", 00:04:48.686 "iscsi_target_node_request_logout", 00:04:48.686 "iscsi_target_node_set_redirect", 00:04:48.686 
"iscsi_target_node_set_auth", 00:04:48.686 "iscsi_target_node_add_lun", 00:04:48.686 "iscsi_get_stats", 00:04:48.686 "iscsi_get_connections", 00:04:48.686 "iscsi_portal_group_set_auth", 00:04:48.686 "iscsi_start_portal_group", 00:04:48.686 "iscsi_delete_portal_group", 00:04:48.686 "iscsi_create_portal_group", 00:04:48.686 "iscsi_get_portal_groups", 00:04:48.686 "iscsi_delete_target_node", 00:04:48.686 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.686 "iscsi_target_node_add_pg_ig_maps", 00:04:48.686 "iscsi_create_target_node", 00:04:48.686 "iscsi_get_target_nodes", 00:04:48.686 "iscsi_delete_initiator_group", 00:04:48.686 "iscsi_initiator_group_remove_initiators", 00:04:48.686 "iscsi_initiator_group_add_initiators", 00:04:48.686 "iscsi_create_initiator_group", 00:04:48.686 "iscsi_get_initiator_groups", 00:04:48.686 "nvmf_set_crdt", 00:04:48.686 "nvmf_set_config", 00:04:48.686 "nvmf_set_max_subsystems", 00:04:48.686 "nvmf_stop_mdns_prr", 00:04:48.686 "nvmf_publish_mdns_prr", 00:04:48.686 "nvmf_subsystem_get_listeners", 00:04:48.686 "nvmf_subsystem_get_qpairs", 00:04:48.686 "nvmf_subsystem_get_controllers", 00:04:48.686 "nvmf_get_stats", 00:04:48.686 "nvmf_get_transports", 00:04:48.686 "nvmf_create_transport", 00:04:48.686 "nvmf_get_targets", 00:04:48.686 "nvmf_delete_target", 00:04:48.686 "nvmf_create_target", 00:04:48.686 "nvmf_subsystem_allow_any_host", 00:04:48.686 "nvmf_subsystem_set_keys", 00:04:48.686 "nvmf_discovery_referral_remove_host", 00:04:48.686 "nvmf_discovery_referral_add_host", 00:04:48.686 "nvmf_subsystem_remove_host", 00:04:48.686 "nvmf_subsystem_add_host", 00:04:48.686 "nvmf_ns_remove_host", 00:04:48.686 "nvmf_ns_add_host", 00:04:48.686 "nvmf_subsystem_remove_ns", 00:04:48.686 "nvmf_subsystem_set_ns_ana_group", 00:04:48.686 "nvmf_subsystem_add_ns", 00:04:48.686 "nvmf_subsystem_listener_set_ana_state", 00:04:48.686 "nvmf_discovery_get_referrals", 00:04:48.686 "nvmf_discovery_remove_referral", 00:04:48.686 "nvmf_discovery_add_referral", 00:04:48.686 "nvmf_subsystem_remove_listener", 00:04:48.686 "nvmf_subsystem_add_listener", 00:04:48.686 "nvmf_delete_subsystem", 00:04:48.686 "nvmf_create_subsystem", 00:04:48.686 "nvmf_get_subsystems", 00:04:48.686 "env_dpdk_get_mem_stats", 00:04:48.686 "nbd_get_disks", 00:04:48.686 "nbd_stop_disk", 00:04:48.686 "nbd_start_disk", 00:04:48.686 "ublk_recover_disk", 00:04:48.686 "ublk_get_disks", 00:04:48.686 "ublk_stop_disk", 00:04:48.686 "ublk_start_disk", 00:04:48.686 "ublk_destroy_target", 00:04:48.686 "ublk_create_target", 00:04:48.686 "virtio_blk_create_transport", 00:04:48.686 "virtio_blk_get_transports", 00:04:48.686 "vhost_controller_set_coalescing", 00:04:48.686 "vhost_get_controllers", 00:04:48.686 "vhost_delete_controller", 00:04:48.686 "vhost_create_blk_controller", 00:04:48.686 "vhost_scsi_controller_remove_target", 00:04:48.686 "vhost_scsi_controller_add_target", 00:04:48.686 "vhost_start_scsi_controller", 00:04:48.686 "vhost_create_scsi_controller", 00:04:48.686 "thread_set_cpumask", 00:04:48.686 "scheduler_set_options", 00:04:48.686 "framework_get_governor", 00:04:48.686 "framework_get_scheduler", 00:04:48.686 "framework_set_scheduler", 00:04:48.686 "framework_get_reactors", 00:04:48.686 "thread_get_io_channels", 00:04:48.686 "thread_get_pollers", 00:04:48.686 "thread_get_stats", 00:04:48.686 "framework_monitor_context_switch", 00:04:48.686 "spdk_kill_instance", 00:04:48.686 "log_enable_timestamps", 00:04:48.686 "log_get_flags", 00:04:48.686 "log_clear_flag", 00:04:48.686 "log_set_flag", 00:04:48.686 "log_get_level", 
00:04:48.686 "log_set_level", 00:04:48.686 "log_get_print_level", 00:04:48.686 "log_set_print_level", 00:04:48.686 "framework_enable_cpumask_locks", 00:04:48.687 "framework_disable_cpumask_locks", 00:04:48.687 "framework_wait_init", 00:04:48.687 "framework_start_init", 00:04:48.687 "scsi_get_devices", 00:04:48.687 "bdev_get_histogram", 00:04:48.687 "bdev_enable_histogram", 00:04:48.687 "bdev_set_qos_limit", 00:04:48.687 "bdev_set_qd_sampling_period", 00:04:48.687 "bdev_get_bdevs", 00:04:48.687 "bdev_reset_iostat", 00:04:48.687 "bdev_get_iostat", 00:04:48.687 "bdev_examine", 00:04:48.687 "bdev_wait_for_examine", 00:04:48.687 "bdev_set_options", 00:04:48.687 "accel_get_stats", 00:04:48.687 "accel_set_options", 00:04:48.687 "accel_set_driver", 00:04:48.687 "accel_crypto_key_destroy", 00:04:48.687 "accel_crypto_keys_get", 00:04:48.687 "accel_crypto_key_create", 00:04:48.687 "accel_assign_opc", 00:04:48.687 "accel_get_module_info", 00:04:48.687 "accel_get_opc_assignments", 00:04:48.687 "vmd_rescan", 00:04:48.687 "vmd_remove_device", 00:04:48.687 "vmd_enable", 00:04:48.687 "sock_get_default_impl", 00:04:48.687 "sock_set_default_impl", 00:04:48.687 "sock_impl_set_options", 00:04:48.687 "sock_impl_get_options", 00:04:48.687 "iobuf_get_stats", 00:04:48.687 "iobuf_set_options", 00:04:48.687 "keyring_get_keys", 00:04:48.687 "framework_get_pci_devices", 00:04:48.687 "framework_get_config", 00:04:48.687 "framework_get_subsystems", 00:04:48.687 "fsdev_set_opts", 00:04:48.687 "fsdev_get_opts", 00:04:48.687 "trace_get_info", 00:04:48.687 "trace_get_tpoint_group_mask", 00:04:48.687 "trace_disable_tpoint_group", 00:04:48.687 "trace_enable_tpoint_group", 00:04:48.687 "trace_clear_tpoint_mask", 00:04:48.687 "trace_set_tpoint_mask", 00:04:48.687 "notify_get_notifications", 00:04:48.687 "notify_get_types", 00:04:48.687 "spdk_get_version", 00:04:48.687 "rpc_get_methods" 00:04:48.687 ] 00:04:48.687 10:03:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.687 10:03:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.687 10:03:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58153 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58153 ']' 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58153 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58153 00:04:48.687 killing process with pid 58153 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58153' 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58153 00:04:48.687 10:03:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58153 00:04:50.587 ************************************ 00:04:50.587 END TEST spdkcli_tcp 00:04:50.587 ************************************ 00:04:50.587 00:04:50.587 real 0m2.949s 00:04:50.587 user 0m5.339s 00:04:50.587 sys 0m0.452s 00:04:50.587 10:03:56 spdkcli_tcp -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.587 10:03:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.587 10:03:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.587 10:03:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.587 10:03:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.587 10:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.587 ************************************ 00:04:50.587 START TEST dpdk_mem_utility 00:04:50.587 ************************************ 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.587 * Looking for test storage... 00:04:50.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:50.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
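For the TCP leg of the spdkcli test above, the run bridged port 9998 to the target's UNIX-domain RPC socket with socat and pointed rpc.py at the TCP side; both commands appear verbatim in the trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &      # bridge (pid 58170 here)
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    # -r 100 -t 2: retry/timeout headroom while the one-shot bridge comes up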
00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.587 10:03:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.587 --rc genhtml_branch_coverage=1 00:04:50.587 --rc genhtml_function_coverage=1 00:04:50.587 --rc genhtml_legend=1 00:04:50.587 --rc geninfo_all_blocks=1 00:04:50.587 --rc geninfo_unexecuted_blocks=1 00:04:50.587 00:04:50.587 ' 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.587 --rc genhtml_branch_coverage=1 00:04:50.587 --rc genhtml_function_coverage=1 00:04:50.587 --rc genhtml_legend=1 00:04:50.587 --rc geninfo_all_blocks=1 00:04:50.587 --rc geninfo_unexecuted_blocks=1 00:04:50.587 00:04:50.587 ' 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.587 --rc genhtml_branch_coverage=1 00:04:50.587 --rc genhtml_function_coverage=1 00:04:50.587 --rc genhtml_legend=1 00:04:50.587 --rc geninfo_all_blocks=1 00:04:50.587 --rc geninfo_unexecuted_blocks=1 00:04:50.587 00:04:50.587 ' 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.587 --rc genhtml_branch_coverage=1 00:04:50.587 --rc genhtml_function_coverage=1 00:04:50.587 --rc genhtml_legend=1 00:04:50.587 --rc geninfo_all_blocks=1 00:04:50.587 --rc geninfo_unexecuted_blocks=1 00:04:50.587 00:04:50.587 ' 00:04:50.587 10:03:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.587 10:03:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58263 00:04:50.587 10:03:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58263 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58263 ']' 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.587 10:03:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.587 10:03:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.587 [2024-12-06 10:03:56.621215] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:04:50.587 [2024-12-06 10:03:56.621339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58263 ] 00:04:50.845 [2024-12-06 10:03:56.780377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.845 [2024-12-06 10:03:56.882755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.410 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.410 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:51.411 10:03:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.411 10:03:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.411 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.411 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.411 { 00:04:51.411 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.411 } 00:04:51.411 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.411 10:03:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.411 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:51.411 1 heaps totaling size 824.000000 MiB 00:04:51.411 size: 824.000000 MiB heap id: 0 00:04:51.411 end heaps---------- 00:04:51.411 9 mempools totaling size 603.782043 MiB 00:04:51.411 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:51.411 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:51.411 size: 100.555481 MiB name: bdev_io_58263 00:04:51.411 size: 50.003479 MiB name: msgpool_58263 00:04:51.411 size: 36.509338 MiB name: fsdev_io_58263 00:04:51.411 size: 21.763794 MiB name: PDU_Pool 00:04:51.411 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:51.411 size: 4.133484 MiB name: evtpool_58263 00:04:51.411 size: 0.026123 MiB name: Session_Pool 00:04:51.411 end mempools------- 00:04:51.411 6 memzones totaling size 4.142822 MiB 00:04:51.411 size: 1.000366 MiB name: RG_ring_0_58263 00:04:51.411 size: 1.000366 MiB name: RG_ring_1_58263 00:04:51.411 size: 1.000366 MiB name: RG_ring_4_58263 00:04:51.411 size: 1.000366 MiB name: RG_ring_5_58263 00:04:51.411 size: 0.125366 MiB name: RG_ring_2_58263 00:04:51.411 size: 0.015991 MiB name: RG_ring_3_58263 00:04:51.411 end memzones------- 00:04:51.411 10:03:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:51.720 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:04:51.720 list of free elements. 
size: 16.778687 MiB 00:04:51.720 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:51.720 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:51.720 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:51.720 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:51.720 element at address: 0x200019900040 with size: 0.999939 MiB 00:04:51.720 element at address: 0x200019a00000 with size: 0.999084 MiB 00:04:51.720 element at address: 0x200032600000 with size: 0.994324 MiB 00:04:51.720 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:51.720 element at address: 0x200019200000 with size: 0.959656 MiB 00:04:51.720 element at address: 0x200019d00040 with size: 0.936401 MiB 00:04:51.720 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:51.720 element at address: 0x20001b400000 with size: 0.560242 MiB 00:04:51.720 element at address: 0x200000c00000 with size: 0.489197 MiB 00:04:51.720 element at address: 0x200019600000 with size: 0.487976 MiB 00:04:51.720 element at address: 0x200019e00000 with size: 0.485413 MiB 00:04:51.720 element at address: 0x200012c00000 with size: 0.433228 MiB 00:04:51.720 element at address: 0x200028800000 with size: 0.390442 MiB 00:04:51.720 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:51.720 list of standard malloc elements. size: 199.290405 MiB 00:04:51.720 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:51.720 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:51.720 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:51.720 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:51.720 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:04:51.720 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:51.720 element at address: 0x200019deff40 with size: 0.062683 MiB 00:04:51.720 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:51.720 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:51.720 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:04:51.720 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:51.720 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:51.720 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:51.720 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:51.720 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:51.721 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200019affc40 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:04:51.721 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4912c0 with size: 0.000244 MiB 
00:04:51.722 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:04:51.722 element at 
address: 0x20001b4944c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:04:51.722 element at address: 0x200028863f40 with size: 0.000244 MiB 00:04:51.722 element at address: 0x200028864040 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886af80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b080 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b180 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b280 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b380 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b480 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b580 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b680 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b780 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b880 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886b980 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886be80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c080 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c180 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c280 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c380 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c480 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c580 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c680 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c780 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c880 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886c980 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886cd80 
with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d080 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d180 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d280 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d380 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d480 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d580 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d680 with size: 0.000244 MiB 00:04:51.722 element at address: 0x20002886d780 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886d880 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886d980 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886da80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886db80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886de80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886df80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e080 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e180 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e280 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e380 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e480 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e580 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e680 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e780 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e880 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886e980 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f080 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f180 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f280 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f380 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f480 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f580 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f680 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f780 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f880 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886f980 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:04:51.723 element at address: 0x20002886fe80 with size: 0.000244 MiB 
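(The free-element and malloc-element listings above, and the memzone table that follows, are scripts/dpdk_mem_info.py's rendering of the dump that the env_dpdk_get_mem_stats RPC wrote to /tmp/spdk_mem_dump.txt at the top of this test. A minimal sketch of reproducing the same output by hand against a running SPDK target; the trailing grep line is purely illustrative and not part of the test script:)

    # Assumes an SPDK app (here spdk_tgt) is already up on the default RPC socket.
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py env_dpdk_get_mem_stats      # writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                   # heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0              # per-element detail, as dumped above
    ./scripts/dpdk_mem_info.py -m 0 | grep -c 'element at address'   # rough element count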
00:04:51.723 list of memzone associated elements. size: 607.930908 MiB 00:04:51.723 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:51.723 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:51.723 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:51.723 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:51.723 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:51.723 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58263_0 00:04:51.723 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:51.723 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58263_0 00:04:51.723 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:51.723 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58263_0 00:04:51.723 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:51.723 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:51.723 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:51.723 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:51.723 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:51.723 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58263_0 00:04:51.723 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:51.723 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58263 00:04:51.723 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:51.723 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58263 00:04:51.723 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:51.723 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:51.723 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:51.723 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:51.723 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:51.723 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:51.723 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:51.723 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:51.723 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:51.723 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58263 00:04:51.723 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:51.723 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58263 00:04:51.723 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:51.723 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58263 00:04:51.723 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:51.723 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58263 00:04:51.723 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:51.723 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58263 00:04:51.723 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:51.723 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58263 00:04:51.723 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:51.723 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:51.723 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:51.723 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:51.723 element at address: 0x200019e7c440 with size: 0.250549 MiB 
00:04:51.723 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:51.723 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:51.723 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58263 00:04:51.723 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:51.723 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58263 00:04:51.723 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:51.723 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:51.723 element at address: 0x200028864140 with size: 0.023804 MiB 00:04:51.723 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:51.723 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:51.723 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58263 00:04:51.723 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:04:51.723 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:51.723 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:51.723 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58263 00:04:51.723 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:51.723 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58263 00:04:51.724 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:51.724 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58263 00:04:51.724 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:04:51.724 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:51.724 10:03:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:51.724 10:03:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58263 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58263 ']' 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58263 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58263 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58263' 00:04:51.724 killing process with pid 58263 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58263 00:04:51.724 10:03:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58263 00:04:53.098 00:04:53.098 real 0m2.752s 00:04:53.098 user 0m2.736s 00:04:53.098 sys 0m0.424s 00:04:53.098 ************************************ 00:04:53.098 END TEST dpdk_mem_utility 00:04:53.098 ************************************ 00:04:53.098 10:03:59 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.098 10:03:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.098 10:03:59 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.098 10:03:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.098 10:03:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.098 
10:03:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.098 ************************************ 00:04:53.098 START TEST event 00:04:53.098 ************************************ 00:04:53.098 10:03:59 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:53.370 * Looking for test storage... 00:04:53.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.370 10:03:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.370 10:03:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.370 10:03:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.370 10:03:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.370 10:03:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.370 10:03:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.370 10:03:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.370 10:03:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.370 10:03:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.370 10:03:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.370 10:03:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.370 10:03:59 event -- scripts/common.sh@344 -- # case "$op" in 00:04:53.370 10:03:59 event -- scripts/common.sh@345 -- # : 1 00:04:53.370 10:03:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.370 10:03:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.370 10:03:59 event -- scripts/common.sh@365 -- # decimal 1 00:04:53.370 10:03:59 event -- scripts/common.sh@353 -- # local d=1 00:04:53.370 10:03:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.370 10:03:59 event -- scripts/common.sh@355 -- # echo 1 00:04:53.370 10:03:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.370 10:03:59 event -- scripts/common.sh@366 -- # decimal 2 00:04:53.370 10:03:59 event -- scripts/common.sh@353 -- # local d=2 00:04:53.370 10:03:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.370 10:03:59 event -- scripts/common.sh@355 -- # echo 2 00:04:53.370 10:03:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.370 10:03:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.370 10:03:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.370 10:03:59 event -- scripts/common.sh@368 -- # return 0 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.370 --rc genhtml_branch_coverage=1 00:04:53.370 --rc genhtml_function_coverage=1 00:04:53.370 --rc genhtml_legend=1 00:04:53.370 --rc geninfo_all_blocks=1 00:04:53.370 --rc geninfo_unexecuted_blocks=1 00:04:53.370 00:04:53.370 ' 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.370 --rc genhtml_branch_coverage=1 00:04:53.370 --rc genhtml_function_coverage=1 00:04:53.370 --rc genhtml_legend=1 00:04:53.370 --rc geninfo_all_blocks=1 00:04:53.370 --rc geninfo_unexecuted_blocks=1 00:04:53.370 00:04:53.370 ' 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.370 --rc genhtml_branch_coverage=1 00:04:53.370 --rc genhtml_function_coverage=1 00:04:53.370 --rc genhtml_legend=1 00:04:53.370 --rc geninfo_all_blocks=1 00:04:53.370 --rc geninfo_unexecuted_blocks=1 00:04:53.370 00:04:53.370 ' 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.370 --rc genhtml_branch_coverage=1 00:04:53.370 --rc genhtml_function_coverage=1 00:04:53.370 --rc genhtml_legend=1 00:04:53.370 --rc geninfo_all_blocks=1 00:04:53.370 --rc geninfo_unexecuted_blocks=1 00:04:53.370 00:04:53.370 ' 00:04:53.370 10:03:59 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:53.370 10:03:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.370 10:03:59 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:53.370 10:03:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.370 10:03:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.370 ************************************ 00:04:53.370 START TEST event_perf 00:04:53.370 ************************************ 00:04:53.370 10:03:59 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.370 Running I/O for 1 seconds...[2024-12-06 
10:03:59.392399] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:53.370 [2024-12-06 10:03:59.392615] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58356 ] 00:04:53.638 [2024-12-06 10:03:59.553085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.638 [2024-12-06 10:03:59.660069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.638 [2024-12-06 10:03:59.660483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.638 [2024-12-06 10:03:59.660713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.638 [2024-12-06 10:03:59.660856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.010 Running I/O for 1 seconds... 00:04:55.010 lcore 0: 197152 00:04:55.010 lcore 1: 197153 00:04:55.010 lcore 2: 197150 00:04:55.010 lcore 3: 197153 00:04:55.010 done. 00:04:55.010 00:04:55.010 real 0m1.470s 00:04:55.010 user 0m4.264s 00:04:55.010 sys 0m0.086s 00:04:55.010 10:04:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.010 10:04:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.010 ************************************ 00:04:55.010 END TEST event_perf 00:04:55.010 ************************************ 00:04:55.010 10:04:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.010 10:04:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:55.010 10:04:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.010 10:04:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.010 ************************************ 00:04:55.010 START TEST event_reactor 00:04:55.010 ************************************ 00:04:55.010 10:04:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.010 [2024-12-06 10:04:00.925504] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
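(All three event-framework microbenchmarks in this block are launched the same way; a minimal sketch of the invocations, with flag meanings inferred from the traces above: -m is the reactor core mask and -t the run time in seconds. Paths are as traced; nothing else is assumed:)

    BIN=/home/vagrant/spdk_repo/spdk/test/event
    "$BIN/event_perf/event_perf" -m 0xF -t 1   # per-lcore event counts on 4 reactors
    "$BIN/reactor/reactor" -t 1                # poller tick trace on a single reactor
    "$BIN/reactor_perf/reactor_perf" -t 1      # aggregate events/sec on one reactor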
00:04:55.010 [2024-12-06 10:04:00.925615] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58401 ] 00:04:55.010 [2024-12-06 10:04:01.082370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.269 [2024-12-06 10:04:01.183299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.201 test_start 00:04:56.201 oneshot 00:04:56.201 tick 100 00:04:56.201 tick 100 00:04:56.201 tick 250 00:04:56.201 tick 100 00:04:56.201 tick 100 00:04:56.201 tick 100 00:04:56.201 tick 250 00:04:56.201 tick 500 00:04:56.201 tick 100 00:04:56.202 tick 100 00:04:56.202 tick 250 00:04:56.202 tick 100 00:04:56.202 tick 100 00:04:56.202 test_end 00:04:56.202 00:04:56.202 real 0m1.452s 00:04:56.202 user 0m1.264s 00:04:56.202 sys 0m0.079s 00:04:56.202 10:04:02 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.202 ************************************ 00:04:56.202 END TEST event_reactor 00:04:56.202 ************************************ 00:04:56.202 10:04:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:56.460 10:04:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.460 10:04:02 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:56.460 10:04:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.460 10:04:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.460 ************************************ 00:04:56.460 START TEST event_reactor_perf 00:04:56.460 ************************************ 00:04:56.460 10:04:02 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.460 [2024-12-06 10:04:02.444725] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
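(Each START TEST / END TEST banner pair in this log comes from the run_test helper in common/autotest_common.sh. A simplified sketch of its shape; the real helper also records timing, manages xtrace state, and tags the log prefix with the test name:)

    run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      "$@"                    # e.g. test/event/reactor_perf/reactor_perf -t 1
      local rc=$?
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return $rc
    }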
00:04:56.460 [2024-12-06 10:04:02.444834] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58432 ] 00:04:56.460 [2024-12-06 10:04:02.597876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.718 [2024-12-06 10:04:02.700528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.093 test_start 00:04:58.093 test_end 00:04:58.093 Performance: 317944 events per second 00:04:58.093 00:04:58.093 real 0m1.445s 00:04:58.093 user 0m1.266s 00:04:58.093 sys 0m0.072s 00:04:58.093 ************************************ 00:04:58.093 END TEST event_reactor_perf 00:04:58.093 ************************************ 00:04:58.093 10:04:03 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.093 10:04:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.093 10:04:03 event -- event/event.sh@49 -- # uname -s 00:04:58.093 10:04:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.093 10:04:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.093 10:04:03 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.093 10:04:03 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.093 10:04:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.093 ************************************ 00:04:58.093 START TEST event_scheduler 00:04:58.093 ************************************ 00:04:58.093 10:04:03 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.093 * Looking for test storage... 
00:04:58.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:58.093 10:04:03 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.093 10:04:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.093 10:04:03 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.093 10:04:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.093 --rc genhtml_branch_coverage=1 00:04:58.093 --rc genhtml_function_coverage=1 00:04:58.093 --rc genhtml_legend=1 00:04:58.093 --rc geninfo_all_blocks=1 00:04:58.093 --rc geninfo_unexecuted_blocks=1 00:04:58.093 00:04:58.093 ' 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.093 --rc genhtml_branch_coverage=1 00:04:58.093 --rc genhtml_function_coverage=1 00:04:58.093 --rc genhtml_legend=1 00:04:58.093 --rc geninfo_all_blocks=1 00:04:58.093 --rc geninfo_unexecuted_blocks=1 00:04:58.093 00:04:58.093 ' 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.093 --rc genhtml_branch_coverage=1 00:04:58.093 --rc genhtml_function_coverage=1 00:04:58.093 --rc genhtml_legend=1 00:04:58.093 --rc geninfo_all_blocks=1 00:04:58.093 --rc geninfo_unexecuted_blocks=1 00:04:58.093 00:04:58.093 ' 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.093 --rc genhtml_branch_coverage=1 00:04:58.093 --rc genhtml_function_coverage=1 00:04:58.093 --rc genhtml_legend=1 00:04:58.093 --rc geninfo_all_blocks=1 00:04:58.093 --rc geninfo_unexecuted_blocks=1 00:04:58.093 00:04:58.093 ' 00:04:58.093 10:04:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.093 10:04:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58508 00:04:58.093 10:04:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.093 10:04:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58508 00:04:58.093 10:04:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.093 10:04:04 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58508 ']' 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.093 10:04:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.093 [2024-12-06 10:04:04.131421] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:04:58.093 [2024-12-06 10:04:04.131608] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58508 ] 00:04:58.351 [2024-12-06 10:04:04.296580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.351 [2024-12-06 10:04:04.406170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.351 [2024-12-06 10:04:04.406683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.351 [2024-12-06 10:04:04.407224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.351 [2024-12-06 10:04:04.407430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.918 10:04:04 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.918 10:04:04 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:58.918 10:04:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:58.918 10:04:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.918 10:04:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.918 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.918 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.918 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.918 POWER: Cannot set governor of lcore 0 to performance 00:04:58.918 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.918 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.918 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:58.918 POWER: Cannot set governor of lcore 0 to userspace 00:04:58.918 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:58.918 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:58.918 POWER: Unable to set Power Management Environment for lcore 0 00:04:58.918 [2024-12-06 10:04:04.992720] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:58.918 [2024-12-06 10:04:04.992743] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:58.918 [2024-12-06 10:04:04.992754] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:58.918 [2024-12-06 10:04:04.992771] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:58.918 [2024-12-06 10:04:04.992780] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:58.918 [2024-12-06 10:04:04.992790] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:58.918 10:04:04 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.918 10:04:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:58.918 10:04:04 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.918 10:04:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 [2024-12-06 10:04:05.222239] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:59.174 10:04:05 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.174 10:04:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.174 10:04:05 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 ************************************ 00:04:59.174 START TEST scheduler_create_thread 00:04:59.174 ************************************ 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 2 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 3 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 4 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 5 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 6 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 7 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 8 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 9 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.174 10 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.174 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.175 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.175 10:04:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.175 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:59.175 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:59.175 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.175 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.431 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.431 10:04:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.431 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.431 10:04:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.803 10:04:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.803 10:04:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:00.803 10:04:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:00.803 10:04:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.803 10:04:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.739 10:04:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.739 ************************************ 00:05:01.739 END TEST scheduler_create_thread 00:05:01.739 ************************************ 00:05:01.739 00:05:01.739 real 0m2.616s 00:05:01.739 user 0m0.019s 00:05:01.739 sys 0m0.004s 00:05:01.739 10:04:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.739 10:04:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.739 10:04:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:01.739 10:04:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58508 00:05:01.739 10:04:07 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58508 ']' 00:05:01.739 10:04:07 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58508 00:05:01.739 10:04:07 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:01.739 10:04:07 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.739 10:04:07 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58508 00:05:01.998 10:04:07 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:01.998 killing process with pid 58508 00:05:01.998 10:04:07 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:01.998 10:04:07 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58508' 00:05:01.998 10:04:07 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58508 00:05:01.998 10:04:07 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 58508 00:05:02.256 [2024-12-06 10:04:08.337412] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:03.260 00:05:03.260 real 0m5.167s 00:05:03.260 user 0m9.090s 00:05:03.260 sys 0m0.338s 00:05:03.260 10:04:09 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.260 10:04:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.260 ************************************ 00:05:03.260 END TEST event_scheduler 00:05:03.260 ************************************ 00:05:03.260 10:04:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.260 10:04:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.260 10:04:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.260 10:04:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.260 10:04:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.260 ************************************ 00:05:03.260 START TEST app_repeat 00:05:03.260 ************************************ 00:05:03.260 10:04:09 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58614 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.260 Process app_repeat pid: 58614 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58614' 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.260 spdk_app_start Round 0 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58614 /var/tmp/spdk-nbd.sock 00:05:03.260 10:04:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58614 ']' 00:05:03.260 10:04:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.260 10:04:09 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.260 10:04:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.260 10:04:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.260 10:04:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.260 10:04:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.261 [2024-12-06 10:04:09.182496] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
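[Annotation] The event_scheduler suite that closes above drives everything through the scheduler RPC plugin: four idle threads pinned one per core (cpumasks 0x1 through 0x8), a thread reporting 30% activity, a "half_active" thread whose activity is raised to 50% after creation, and a throwaway thread that is deleted again. A minimal sketch of that sequence, assuming scripts/rpc.py can import the test's scheduler_plugin (via PYTHONPATH) and that the app listens on the default RPC socket; both of those details are assumptions, not shown in this log:

    # Replay of the scheduler_create_thread RPC sequence traced above.
    rpc="scripts/rpc.py --plugin scheduler_plugin"

    # Four idle threads, each pinned to a single core
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # A thread that reports 30% activity
    $rpc scheduler_thread_create -n one_third_active -a 30

    # scheduler_thread_create prints the new thread id (11 in the trace);
    # bump that thread from idle to 50% active
    id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$id" 50

    # Create and immediately delete a thread (id 12 in the trace)
    id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$id"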
00:05:03.261 [2024-12-06 10:04:09.182616] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58614 ] 00:05:03.261 [2024-12-06 10:04:09.339363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.519 [2024-12-06 10:04:09.442833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.519 [2024-12-06 10:04:09.442852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.086 10:04:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.086 10:04:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:04.086 10:04:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.344 Malloc0 00:05:04.344 10:04:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.602 Malloc1 00:05:04.602 10:04:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.602 10:04:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.859 /dev/nbd0 00:05:04.859 10:04:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.859 10:04:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.859 10:04:10 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.859 1+0 records in 00:05:04.859 1+0 records out 00:05:04.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023606 s, 17.4 MB/s 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.859 10:04:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.859 10:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.859 10:04:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.859 10:04:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.859 /dev/nbd1 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.117 1+0 records in 00:05:05.117 1+0 records out 00:05:05.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235495 s, 17.4 MB/s 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:05.117 10:04:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.117 
10:04:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.117 { 00:05:05.117 "nbd_device": "/dev/nbd0", 00:05:05.117 "bdev_name": "Malloc0" 00:05:05.117 }, 00:05:05.117 { 00:05:05.117 "nbd_device": "/dev/nbd1", 00:05:05.117 "bdev_name": "Malloc1" 00:05:05.117 } 00:05:05.117 ]' 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.117 { 00:05:05.117 "nbd_device": "/dev/nbd0", 00:05:05.117 "bdev_name": "Malloc0" 00:05:05.117 }, 00:05:05.117 { 00:05:05.117 "nbd_device": "/dev/nbd1", 00:05:05.117 "bdev_name": "Malloc1" 00:05:05.117 } 00:05:05.117 ]' 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.117 /dev/nbd1' 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.117 /dev/nbd1' 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.117 10:04:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.376 256+0 records in 00:05:05.376 256+0 records out 00:05:05.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00843837 s, 124 MB/s 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.376 256+0 records in 00:05:05.376 256+0 records out 00:05:05.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208175 s, 50.4 MB/s 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.376 256+0 records in 00:05:05.376 256+0 records out 00:05:05.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248449 s, 42.2 MB/s 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.376 10:04:11 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.376 10:04:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.633 10:04:11 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.633 10:04:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.197 10:04:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.197 10:04:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.453 10:04:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.017 [2024-12-06 10:04:13.047031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.017 [2024-12-06 10:04:13.131128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.017 [2024-12-06 10:04:13.131159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.298 [2024-12-06 10:04:13.236864] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.298 [2024-12-06 10:04:13.236946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.821 10:04:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.821 spdk_app_start Round 1 00:05:09.821 10:04:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.821 10:04:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58614 /var/tmp/spdk-nbd.sock 00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58614 ']' 00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
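[Annotation] The block above is one complete app_repeat round, and the later rounds below repeat the same create, export, write, verify, tear down cycle. The "1+0 records in/out" lines are a waitfornbd probe: after each nbd_start_disk the helper polls /proc/partitions for the device and proves it services I/O with a single 4 KiB O_DIRECT read before the round proceeds. The round itself, condensed into plain shell using only commands visible in the trace (the scratch-file path is the one substitution here; the test keeps it inside the repo tree):

    # One nbd_rpc_data_verify round, as traced above.
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/tmp/nbdrandtest

    $rpc bdev_malloc_create 64 4096         # prints "Malloc0": 64 MiB, 4 KiB blocks
    $rpc bdev_malloc_create 64 4096         # prints "Malloc1"
    $rpc nbd_start_disk Malloc0 /dev/nbd0   # export the bdevs via the kernel nbd driver
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write 1 MiB of random data through each device, then read it back
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$dev"          # any mismatch fails the round
    done
    rm "$tmp"

    $rpc nbd_stop_disk /dev/nbd0            # tear down before the next round
    $rpc nbd_stop_disk /dev/nbd1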
00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.821 10:04:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.821 10:04:15 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.821 Malloc0 00:05:09.821 10:04:15 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.078 Malloc1 00:05:10.079 10:04:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.079 10:04:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.337 /dev/nbd0 00:05:10.337 10:04:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.337 10:04:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.337 1+0 records in 00:05:10.337 1+0 records out 
00:05:10.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254933 s, 16.1 MB/s 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.337 10:04:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.337 10:04:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.337 10:04:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.337 10:04:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.596 /dev/nbd1 00:05:10.596 10:04:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.596 10:04:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.596 10:04:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.596 1+0 records in 00:05:10.596 1+0 records out 00:05:10.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249624 s, 16.4 MB/s 00:05:10.597 10:04:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.597 10:04:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.597 10:04:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.597 10:04:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.597 10:04:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.597 10:04:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.597 10:04:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.597 10:04:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.597 10:04:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.597 10:04:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.854 { 00:05:10.854 "nbd_device": "/dev/nbd0", 00:05:10.854 "bdev_name": "Malloc0" 00:05:10.854 }, 00:05:10.854 { 00:05:10.854 "nbd_device": "/dev/nbd1", 00:05:10.854 "bdev_name": "Malloc1" 00:05:10.854 } 
00:05:10.854 ]' 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.854 { 00:05:10.854 "nbd_device": "/dev/nbd0", 00:05:10.854 "bdev_name": "Malloc0" 00:05:10.854 }, 00:05:10.854 { 00:05:10.854 "nbd_device": "/dev/nbd1", 00:05:10.854 "bdev_name": "Malloc1" 00:05:10.854 } 00:05:10.854 ]' 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.854 /dev/nbd1' 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.854 /dev/nbd1' 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.854 10:04:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.855 256+0 records in 00:05:10.855 256+0 records out 00:05:10.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423213 s, 248 MB/s 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.855 256+0 records in 00:05:10.855 256+0 records out 00:05:10.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174679 s, 60.0 MB/s 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.855 256+0 records in 00:05:10.855 256+0 records out 00:05:10.855 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170448 s, 61.5 MB/s 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.855 10:04:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.855 10:04:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.113 10:04:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.370 10:04:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.628 10:04:17 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.628 10:04:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.628 10:04:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.886 10:04:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.452 [2024-12-06 10:04:18.440208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.452 [2024-12-06 10:04:18.526423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.452 [2024-12-06 10:04:18.526424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.710 [2024-12-06 10:04:18.632813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.710 [2024-12-06 10:04:18.632900] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.337 spdk_app_start Round 2 00:05:15.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.337 10:04:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.337 10:04:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:15.337 10:04:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58614 /var/tmp/spdk-nbd.sock 00:05:15.337 10:04:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58614 ']' 00:05:15.337 10:04:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.337 10:04:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.337 10:04:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
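[Annotation] The count checks bracketing each round ('[' 2 -ne 2 ']' while the disks are up, '[' 0 -ne 0 ']' after teardown, when nbd_get_disks returns the empty [] seen above) are derived from the RPC's JSON output. A sketch of that helper as it appears in the trace; the || true matters because grep -c exits non-zero on zero matches, which would trip a set -e script (the bare "# true" entries above are exactly that fallback firing):

    # nbd_get_count: how many /dev/nbd devices the SPDK app has exported.
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # JSON like: [ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" }, ... ]
    nbd_disks_json=$($rpc nbd_get_disks)

    # Reduce to one device path per line, then count the /dev/nbd entries
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)

    echo "$count"    # 2 while a round is running, 0 after nbd_stop_disk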
00:05:15.337 10:04:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.337 10:04:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.337 10:04:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.337 10:04:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:15.337 10:04:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.337 Malloc0 00:05:15.337 10:04:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.611 Malloc1 00:05:15.611 10:04:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.611 10:04:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.611 /dev/nbd0 00:05:15.869 10:04:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.869 10:04:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.869 1+0 records in 00:05:15.869 1+0 records out 
00:05:15.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182128 s, 22.5 MB/s 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.869 10:04:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.869 10:04:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.869 10:04:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.869 10:04:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.869 /dev/nbd1 00:05:15.869 10:04:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.869 10:04:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.869 1+0 records in 00:05:15.869 1+0 records out 00:05:15.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228584 s, 17.9 MB/s 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.869 10:04:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.869 10:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.869 10:04:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.869 10:04:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.869 10:04:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.869 10:04:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.128 10:04:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.128 { 00:05:16.128 "nbd_device": "/dev/nbd0", 00:05:16.128 "bdev_name": "Malloc0" 00:05:16.128 }, 00:05:16.128 { 00:05:16.128 "nbd_device": "/dev/nbd1", 00:05:16.128 "bdev_name": "Malloc1" 00:05:16.128 } 
00:05:16.128 ]' 00:05:16.128 10:04:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.128 { 00:05:16.128 "nbd_device": "/dev/nbd0", 00:05:16.128 "bdev_name": "Malloc0" 00:05:16.128 }, 00:05:16.128 { 00:05:16.128 "nbd_device": "/dev/nbd1", 00:05:16.128 "bdev_name": "Malloc1" 00:05:16.128 } 00:05:16.128 ]' 00:05:16.128 10:04:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.386 /dev/nbd1' 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.386 /dev/nbd1' 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.386 256+0 records in 00:05:16.386 256+0 records out 00:05:16.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480229 s, 218 MB/s 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.386 256+0 records in 00:05:16.386 256+0 records out 00:05:16.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200163 s, 52.4 MB/s 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.386 10:04:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.386 256+0 records in 00:05:16.387 256+0 records out 00:05:16.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166383 s, 63.0 MB/s 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.387 10:04:22 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.387 10:04:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.644 10:04:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.901 10:04:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.901 10:04:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.901 10:04:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.901 10:04:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.901 10:04:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.901 10:04:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.901 10:04:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.902 10:04:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.466 10:04:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.032 [2024-12-06 10:04:23.927017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.032 [2024-12-06 10:04:24.013520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.032 [2024-12-06 10:04:24.013557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.032 [2024-12-06 10:04:24.119861] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.032 [2024-12-06 10:04:24.119928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.650 10:04:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58614 /var/tmp/spdk-nbd.sock 00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58614 ']' 00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
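[Annotation] The teardown entries that follow are the shared killprocess helper, reconstructed here from the xtrace (the sudo branch is elided, since this trace never takes it). Note that wait only works because the test shell launched the app itself, so the pid is a child of the shell:

    # killprocess: terminate an SPDK app and reap it, as traced in this log.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1               # bail if already gone

        # Resolve the command name (reactor_0 here) for the log message
        local process_name=unknown
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # The '[' reactor_0 = sudo ']' check above: the real helper
        # special-cases sudo-wrapped apps; that branch is not exercised here.

        echo "killing process with pid $pid"
        kill "$pid"        # plain kill sends SIGTERM
        wait "$pid"        # reap the child so its exit status is observed
    }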
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:20.650 10:04:26 event.app_repeat -- event/event.sh@39 -- # killprocess 58614
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58614 ']'
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58614
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58614
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:20.650 10:04:26 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:20.650 killing process with pid 58614
10:04:26 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58614'
10:04:26 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58614
10:04:26 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58614
00:05:21.217 spdk_app_start is called in Round 0.
00:05:21.217 Shutdown signal received, stop current app iteration
00:05:21.217 Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 reinitialization...
00:05:21.217 spdk_app_start is called in Round 1.
00:05:21.217 Shutdown signal received, stop current app iteration
00:05:21.217 Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 reinitialization...
00:05:21.217 spdk_app_start is called in Round 2.
00:05:21.217 Shutdown signal received, stop current app iteration
00:05:21.217 Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 reinitialization...
00:05:21.217 spdk_app_start is called in Round 3.
00:05:21.217 Shutdown signal received, stop current app iteration
00:05:21.217 10:04:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:21.217 10:04:27 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:21.217
00:05:21.217 real 0m17.985s
00:05:21.217 user 0m39.426s
00:05:21.217 sys 0m2.168s
00:05:21.217 10:04:27 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:21.217 10:04:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:21.217 ************************************
00:05:21.217 END TEST app_repeat
00:05:21.217 ************************************
00:05:21.217 10:04:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:21.217 10:04:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:21.217 10:04:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:21.217 10:04:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:21.217 10:04:27 event -- common/autotest_common.sh@10 -- # set +x
00:05:21.217 ************************************
00:05:21.217 START TEST cpu_locks
00:05:21.217 ************************************
00:05:21.217 10:04:27 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:21.217 * Looking for test storage...
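[Annotation] Both suites above, and cpu_locks now starting, run under run_test, which is what produces the START TEST/END TEST banners and the real/user/sys block (the bash time builtin; it writes to stderr while the banners go to stdout, which is why their relative order differs between scheduler_create_thread and app_repeat in this log). A minimal reconstruction consistent with that output; the banner width and the harness's xtrace bookkeeping are simplified away:

    # run_test <name> <command...>: banner, timed run, banner.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # emits the real/user/sys block on stderr
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # As invoked in this log:
    #   run_test app_repeat app_repeat_test
    #   run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh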
00:05:21.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:21.217 10:04:27 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.217 10:04:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.217 10:04:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.217 10:04:27 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.217 10:04:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.218 10:04:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.218 --rc genhtml_branch_coverage=1 00:05:21.218 --rc genhtml_function_coverage=1 00:05:21.218 --rc genhtml_legend=1 00:05:21.218 --rc geninfo_all_blocks=1 00:05:21.218 --rc geninfo_unexecuted_blocks=1 00:05:21.218 00:05:21.218 ' 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.218 --rc genhtml_branch_coverage=1 00:05:21.218 --rc genhtml_function_coverage=1 
00:05:21.218 --rc genhtml_legend=1 00:05:21.218 --rc geninfo_all_blocks=1 00:05:21.218 --rc geninfo_unexecuted_blocks=1 00:05:21.218 00:05:21.218 ' 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.218 --rc genhtml_branch_coverage=1 00:05:21.218 --rc genhtml_function_coverage=1 00:05:21.218 --rc genhtml_legend=1 00:05:21.218 --rc geninfo_all_blocks=1 00:05:21.218 --rc geninfo_unexecuted_blocks=1 00:05:21.218 00:05:21.218 ' 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.218 --rc genhtml_branch_coverage=1 00:05:21.218 --rc genhtml_function_coverage=1 00:05:21.218 --rc genhtml_legend=1 00:05:21.218 --rc geninfo_all_blocks=1 00:05:21.218 --rc geninfo_unexecuted_blocks=1 00:05:21.218 00:05:21.218 ' 00:05:21.218 10:04:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:21.218 10:04:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:21.218 10:04:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:21.218 10:04:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.218 10:04:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.218 ************************************ 00:05:21.218 START TEST default_locks 00:05:21.218 ************************************ 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59045 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59045 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59045 ']' 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.218 10:04:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.476 [2024-12-06 10:04:27.413531] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:05:21.476 [2024-12-06 10:04:27.413694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59045 ] 00:05:21.476 [2024-12-06 10:04:27.586020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.734 [2024-12-06 10:04:27.672326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59045 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59045 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59045 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59045 ']' 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59045 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.300 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59045 00:05:22.557 killing process with pid 59045 00:05:22.557 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.557 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.557 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59045' 00:05:22.557 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59045 00:05:22.557 10:04:28 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59045 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59045 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59045 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59045 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59045 ']' 00:05:23.924 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.925 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.925 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59045) - No such process 00:05:23.925 ERROR: process (pid: 59045) is no longer running 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.925 00:05:23.925 real 0m2.395s 00:05:23.925 user 0m2.324s 00:05:23.925 sys 0m0.471s 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.925 10:04:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.925 ************************************ 00:05:23.925 END TEST default_locks 00:05:23.925 ************************************ 00:05:23.925 10:04:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:23.925 10:04:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.925 10:04:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.925 10:04:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.925 ************************************ 00:05:23.925 START TEST default_locks_via_rpc 00:05:23.925 ************************************ 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59102 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59102 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59102 ']' 00:05:23.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
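What the default_locks test that just finished boils down to: start one spdk_tgt pinned to core 0 and confirm the kernel sees its per-core file lock. A minimal recap using the same commands the trace shows:

  # -m 0x1 pins the reactor to core 0, which claims /var/tmp/spdk_cpu_lock_000
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  pid=$!
  # lslocks lists the file locks a process holds; the test greps for the prefix
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

After the target is killed, the same lookup is expected to fail, which is why the trace closes the test with a NOT-wrapped waitforlisten and a "No such process" error.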
00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.925 10:04:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.925 [2024-12-06 10:04:29.825695] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:23.925 [2024-12-06 10:04:29.825830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59102 ] 00:05:23.925 [2024-12-06 10:04:29.982082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.925 [2024-12-06 10:04:30.073863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59102 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59102 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59102 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59102 ']' 
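default_locks_via_rpc, traced above, exercises the same per-core lock but flips it at runtime over JSON-RPC instead of at startup. From the shell the two calls look like this (method names as logged; rpc.py talks to /var/tmp/spdk.sock by default):

  # release the lock files while the target keeps running
  scripts/rpc.py framework_disable_cpumask_locks
  # re-acquire them; this fails if another process claimed a core in between
  scripts/rpc.py framework_enable_cpumask_locks

Between the two calls the no_locks helper asserts that lslocks reports nothing, and locks_exist asserts the opposite after re-enabling.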
00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59102 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59102 00:05:24.855 killing process with pid 59102 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59102' 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59102 00:05:24.855 10:04:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59102 00:05:26.277 00:05:26.277 real 0m2.436s 00:05:26.277 user 0m2.459s 00:05:26.277 sys 0m0.463s 00:05:26.277 10:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.277 ************************************ 00:05:26.277 END TEST default_locks_via_rpc 00:05:26.277 10:04:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.277 ************************************ 00:05:26.277 10:04:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:26.277 10:04:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.277 10:04:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.277 10:04:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.277 ************************************ 00:05:26.277 START TEST non_locking_app_on_locked_coremask 00:05:26.277 ************************************ 00:05:26.277 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:26.277 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59155 00:05:26.277 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59155 /var/tmp/spdk.sock 00:05:26.277 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59155 ']' 00:05:26.277 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.277 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.277 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
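killprocess, which just ran for pid 59102 and recurs after every test here, is the common teardown helper. Condensed from the xtrace into a sketch (the real helper in autotest_common.sh also branches on uname and guards against sudo-owned processes):

  killprocess() {
    kill -0 "$1"                                   # fails fast if the pid is already gone
    process_name=$(ps --no-headers -o comm= "$1")  # reactor_0 for these targets
    echo "killing process with pid $1"
    kill "$1" && wait "$1"                         # wait collects the exit status
  }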
00:05:26.278 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.278 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.278 10:04:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.278 [2024-12-06 10:04:32.307177] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:26.278 [2024-12-06 10:04:32.307355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59155 ] 00:05:26.535 [2024-12-06 10:04:32.478891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.535 [2024-12-06 10:04:32.581921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59171 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59171 /var/tmp/spdk2.sock 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59171 ']' 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.098 10:04:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.354 [2024-12-06 10:04:33.268864] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:27.354 [2024-12-06 10:04:33.269209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59171 ] 00:05:27.354 [2024-12-06 10:04:33.444830] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
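non_locking_app_on_locked_coremask, starting above, pairs a locked target with an unlocked one on the same core: the second instance may share core 0 only because it opts out of lock claiming. In outline, with the flags straight from the trace:

  # first instance claims core 0's lock file
  spdk_tgt -m 0x1 &
  # second instance skips the claim, so startup succeeds despite the overlap
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

The -r flag gives the second instance its own RPC socket so the two can be driven independently.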
00:05:27.354 [2024-12-06 10:04:33.444904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.612 [2024-12-06 10:04:33.651899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.985 10:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.985 10:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.985 10:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59155 00:05:28.985 10:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59155 00:05:28.985 10:04:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59155 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59155 ']' 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59155 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59155 00:05:28.985 killing process with pid 59155 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59155' 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59155 00:05:28.985 10:04:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59155 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59171 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59171 ']' 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59171 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59171 00:05:32.262 killing process with pid 59171 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59171' 00:05:32.262 10:04:37 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59171 00:05:32.262 10:04:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59171 00:05:33.195 00:05:33.195 real 0m6.864s 00:05:33.195 user 0m7.133s 00:05:33.195 sys 0m0.859s 00:05:33.195 10:04:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.195 10:04:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.195 ************************************ 00:05:33.195 END TEST non_locking_app_on_locked_coremask 00:05:33.195 ************************************ 00:05:33.195 10:04:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.195 10:04:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.195 10:04:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.195 10:04:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.195 ************************************ 00:05:33.195 START TEST locking_app_on_unlocked_coremask 00:05:33.195 ************************************ 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:33.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59273 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59273 /var/tmp/spdk.sock 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59273 ']' 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.195 10:04:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.195 [2024-12-06 10:04:39.217848] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:33.195 [2024-12-06 10:04:39.217965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59273 ] 00:05:33.453 [2024-12-06 10:04:39.372223] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
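locking_app_on_unlocked_coremask, which begins above, inverts the previous case: the first instance starts without locks, so a second, normally-locking instance can still claim the core. Schematically:

  # unlocked first instance holds no /var/tmp/spdk_cpu_lock_* file
  spdk_tgt -m 0x1 --disable-cpumask-locks &
  # a plain second instance on the same core can therefore take the lock
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &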
00:05:33.453 [2024-12-06 10:04:39.372269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.453 [2024-12-06 10:04:39.457925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59284 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59284 /var/tmp/spdk2.sock 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59284 ']' 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:34.018 10:04:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.018 [2024-12-06 10:04:40.127904] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:05:34.018 [2024-12-06 10:04:40.128023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59284 ] 00:05:34.275 [2024-12-06 10:04:40.290866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.533 [2024-12-06 10:04:40.464593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.465 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.465 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.465 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59284 00:05:35.465 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59284 00:05:35.465 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59273 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59273 ']' 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59273 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59273 00:05:35.722 killing process with pid 59273 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59273' 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59273 00:05:35.722 10:04:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59273 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59284 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59284 ']' 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59284 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59284 00:05:38.273 killing process with pid 59284 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.273 10:04:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59284' 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59284 00:05:38.273 10:04:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59284 00:05:39.644 ************************************ 00:05:39.644 END TEST locking_app_on_unlocked_coremask 00:05:39.644 ************************************ 00:05:39.644 00:05:39.644 real 0m6.462s 00:05:39.644 user 0m6.711s 00:05:39.644 sys 0m0.854s 00:05:39.644 10:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.644 10:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.644 10:04:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:39.644 10:04:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.644 10:04:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.644 10:04:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 ************************************ 00:05:39.645 START TEST locking_app_on_locked_coremask 00:05:39.645 ************************************ 00:05:39.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59386 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59386 /var/tmp/spdk.sock 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 10:04:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.645 [2024-12-06 10:04:45.713531] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:05:39.645 [2024-12-06 10:04:45.713635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:05:39.902 [2024-12-06 10:04:45.862941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.902 [2024-12-06 10:04:45.946821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59396 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59396 /var/tmp/spdk2.sock 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59396 /var/tmp/spdk2.sock 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59396 /var/tmp/spdk2.sock 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.504 10:04:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.504 [2024-12-06 10:04:46.636971] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:05:40.504 [2024-12-06 10:04:46.637252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:05:40.762 [2024-12-06 10:04:46.800807] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59386 has claimed it. 00:05:40.762 [2024-12-06 10:04:46.800876] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:41.328 ERROR: process (pid: 59396) is no longer running 00:05:41.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59396) - No such process 00:05:41.328 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.328 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:41.329 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:41.329 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:41.329 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:41.329 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:41.329 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59386 00:05:41.329 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59386 00:05:41.329 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59386 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59386 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:05:41.586 killing process with pid 59386 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59386' 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59386 00:05:41.586 10:04:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59386 00:05:42.966 00:05:42.966 real 0m3.111s 00:05:42.966 user 0m3.445s 00:05:42.966 sys 0m0.497s 00:05:42.966 10:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.966 10:04:48 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:42.966 ************************************ 00:05:42.966 END TEST locking_app_on_locked_coremask 00:05:42.966 ************************************ 00:05:42.966 10:04:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:42.966 10:04:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.966 10:04:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.966 10:04:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.966 ************************************ 00:05:42.966 START TEST locking_overlapped_coremask 00:05:42.966 ************************************ 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59455 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59455 /var/tmp/spdk.sock 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59455 ']' 00:05:42.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.966 10:04:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:42.966 [2024-12-06 10:04:48.887806] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
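locking_app_on_locked_coremask, which ends above, asserts the failure path: with core 0's lock already held, a second plain start on the same core must exit. The NOT helper inverts an exit status, so the test passes only when the claim fails; a sketch under those semantics:

  spdk_tgt -m 0x1 &                         # holds the core 0 lock
  # the second instance logs "Cannot create lock on core 0, probably process
  # <pid> has claimed it." and exits before ever listening
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  NOT waitforlisten $! /var/tmp/spdk2.sock  # succeeds because the listener never appears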
00:05:42.966 [2024-12-06 10:04:48.887902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59455 ] 00:05:42.966 [2024-12-06 10:04:49.038709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.966 [2024-12-06 10:04:49.126023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.966 [2024-12-06 10:04:49.126189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.966 [2024-12-06 10:04:49.126329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59473 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59473 /var/tmp/spdk2.sock 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59473 /var/tmp/spdk2.sock 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:43.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59473 /var/tmp/spdk2.sock 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59473 ']' 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.900 10:04:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.900 [2024-12-06 10:04:49.809493] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
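The overlap in locking_overlapped_coremask is plain bitmask arithmetic: -m 0x7 is binary 111 (cores 0, 1, 2) and -m 0x1c is binary 11100 (cores 2, 3, 4), so both masks contain core 2 and the second claim has to fail, as the ERROR lines below show:

  spdk_tgt -m 0x7 &                         # claims lock files 000, 001, 002
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock   # overlaps on core 2 -> claim fails, exits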
00:05:43.900 [2024-12-06 10:04:49.809624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59473 ] 00:05:43.900 [2024-12-06 10:04:49.993288] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59455 has claimed it. 00:05:43.900 [2024-12-06 10:04:49.993368] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:44.466 ERROR: process (pid: 59473) is no longer running 00:05:44.466 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59473) - No such process 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59455 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59455 ']' 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59455 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59455 00:05:44.466 killing process with pid 59455 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59455' 00:05:44.466 10:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59455 00:05:44.466 10:04:50 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59455 00:05:45.839 00:05:45.839 real 0m2.853s 00:05:45.839 user 0m7.804s 00:05:45.839 sys 0m0.417s 00:05:45.839 ************************************ 00:05:45.839 END TEST locking_overlapped_coremask 00:05:45.839 ************************************ 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.839 10:04:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:45.839 10:04:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.839 10:04:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.839 10:04:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.839 ************************************ 00:05:45.839 START TEST locking_overlapped_coremask_via_rpc 00:05:45.839 ************************************ 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59521 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59521 /var/tmp/spdk.sock 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59521 ']' 00:05:45.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.839 10:04:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.839 [2024-12-06 10:04:51.805131] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:45.839 [2024-12-06 10:04:51.805251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59521 ] 00:05:45.839 [2024-12-06 10:04:51.962900] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
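The via_rpc variant starting above launches both overlapping targets with --disable-cpumask-locks, so nothing is claimed at startup; the contention is manufactured afterwards over RPC. Setup in outline (the second launch and the RPC calls follow below in the trace):

  spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, unclaimed
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, unclaimed
  # first instance claims cores 0-2 at runtime
  scripts/rpc.py framework_enable_cpumask_locks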
00:05:45.839 [2024-12-06 10:04:51.963092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.096 [2024-12-06 10:04:52.050846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.096 [2024-12-06 10:04:52.051159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.096 [2024-12-06 10:04:52.051199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59538 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59538 /var/tmp/spdk2.sock 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59538 ']' 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.661 10:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.661 [2024-12-06 10:04:52.724554] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:46.661 [2024-12-06 10:04:52.724670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59538 ] 00:05:46.919 [2024-12-06 10:04:52.902981] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
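The two targets in this test are deliberately started on overlapping core masks, the first with -m 0x7 and the second with -m 0x1c, both with --disable-cpumask-locks so they can boot side by side (hence the "CPU core locks deactivated" notices). The overlap is plain bit arithmetic on the masks from the command lines above:

echo $(( 0x7 ))          # 7  -> binary 111,   cores 0 1 2 (first target)
echo $(( 0x1c ))         # 28 -> binary 11100, cores 2 3 4 (second target)
echo $(( 0x7 & 0x1c ))   # 4  -> binary 100,   core 2 is claimed by both

So once cpumask locks are re-enabled over RPC, whichever target enables them second must fail to claim core 2, which is exactly the error that follows.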
00:05:46.919 [2024-12-06 10:04:52.903046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.176 [2024-12-06 10:04:53.111885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.176 [2024-12-06 10:04:53.115509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.176 [2024-12-06 10:04:53.115517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.547 [2024-12-06 10:04:54.301623] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59521 has claimed it. 00:05:48.547 request: 00:05:48.547 { 00:05:48.547 "method": "framework_enable_cpumask_locks", 00:05:48.547 "req_id": 1 00:05:48.547 } 00:05:48.547 Got JSON-RPC error response 00:05:48.547 response: 00:05:48.547 { 00:05:48.547 "code": -32603, 00:05:48.547 "message": "Failed to claim CPU core: 2" 00:05:48.547 } 00:05:48.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
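The failing RPC above can be reproduced by hand against the same sockets; rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py, so a sketch of the equivalent direct calls (paths as in this run) would be:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
# first target (default socket /var/tmp/spdk.sock, mask 0x7): succeeds and
# takes /var/tmp/spdk_cpu_lock_000..002
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# second target (mask 0x1c): core 2 is already locked, so the call returns
# the JSON-RPC error -32603 "Failed to claim CPU core: 2" shown above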
00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59521 /var/tmp/spdk.sock 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59521 ']' 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.547 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59538 /var/tmp/spdk2.sock 00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59538 ']' 00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.548 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.805 ************************************ 00:05:48.805 END TEST locking_overlapped_coremask_via_rpc 00:05:48.805 ************************************ 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:48.805 00:05:48.805 real 0m3.018s 00:05:48.805 user 0m1.166s 00:05:48.805 sys 0m0.119s 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.805 10:04:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.805 10:04:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:48.805 10:04:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59521 ]] 00:05:48.805 10:04:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59521 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59521 ']' 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59521 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59521 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.805 killing process with pid 59521 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59521' 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59521 00:05:48.805 10:04:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59521 00:05:50.174 10:04:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59538 ]] 00:05:50.174 10:04:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59538 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59538 ']' 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59538 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.174 
10:04:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59538 00:05:50.174 killing process with pid 59538 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59538' 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59538 00:05:50.174 10:04:56 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59538 00:05:51.546 10:04:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.546 Process with pid 59521 is not found 00:05:51.546 Process with pid 59538 is not found 00:05:51.546 10:04:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:51.546 10:04:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59521 ]] 00:05:51.546 10:04:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59521 00:05:51.546 10:04:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59521 ']' 00:05:51.546 10:04:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59521 00:05:51.546 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59521) - No such process 00:05:51.546 10:04:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59521 is not found' 00:05:51.547 10:04:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59538 ]] 00:05:51.547 10:04:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59538 00:05:51.547 10:04:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59538 ']' 00:05:51.547 10:04:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59538 00:05:51.547 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59538) - No such process 00:05:51.547 10:04:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59538 is not found' 00:05:51.547 10:04:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:51.547 00:05:51.547 real 0m30.135s 00:05:51.547 user 0m51.940s 00:05:51.547 sys 0m4.477s 00:05:51.547 ************************************ 00:05:51.547 END TEST cpu_locks 00:05:51.547 ************************************ 00:05:51.547 10:04:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.547 10:04:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 ************************************ 00:05:51.547 END TEST event 00:05:51.547 ************************************ 00:05:51.547 00:05:51.547 real 0m58.123s 00:05:51.547 user 1m47.408s 00:05:51.547 sys 0m7.454s 00:05:51.547 10:04:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.547 10:04:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 10:04:57 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:51.547 10:04:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.547 10:04:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.547 10:04:57 -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 ************************************ 00:05:51.547 START TEST thread 00:05:51.547 ************************************ 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:51.547 * Looking for test storage... 
00:05:51.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:51.547 10:04:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.547 10:04:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.547 10:04:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.547 10:04:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.547 10:04:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.547 10:04:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.547 10:04:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.547 10:04:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.547 10:04:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.547 10:04:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.547 10:04:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.547 10:04:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:51.547 10:04:57 thread -- scripts/common.sh@345 -- # : 1 00:05:51.547 10:04:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.547 10:04:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.547 10:04:57 thread -- scripts/common.sh@365 -- # decimal 1 00:05:51.547 10:04:57 thread -- scripts/common.sh@353 -- # local d=1 00:05:51.547 10:04:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.547 10:04:57 thread -- scripts/common.sh@355 -- # echo 1 00:05:51.547 10:04:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.547 10:04:57 thread -- scripts/common.sh@366 -- # decimal 2 00:05:51.547 10:04:57 thread -- scripts/common.sh@353 -- # local d=2 00:05:51.547 10:04:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.547 10:04:57 thread -- scripts/common.sh@355 -- # echo 2 00:05:51.547 10:04:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.547 10:04:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.547 10:04:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.547 10:04:57 thread -- scripts/common.sh@368 -- # return 0 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 10:04:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.547 10:04:57 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 ************************************ 00:05:51.547 START TEST thread_poller_perf 00:05:51.547 ************************************ 00:05:51.547 10:04:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:51.547 [2024-12-06 10:04:57.523398] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:51.547 [2024-12-06 10:04:57.523651] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59693 ] 00:05:51.547 [2024-12-06 10:04:57.678905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.805 [2024-12-06 10:04:57.781086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.805 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:53.178 [2024-12-06T10:04:59.345Z] ====================================== 00:05:53.178 [2024-12-06T10:04:59.345Z] busy:2612636718 (cyc) 00:05:53.178 [2024-12-06T10:04:59.345Z] total_run_count: 303000 00:05:53.178 [2024-12-06T10:04:59.345Z] tsc_hz: 2600000000 (cyc) 00:05:53.178 [2024-12-06T10:04:59.345Z] ====================================== 00:05:53.178 [2024-12-06T10:04:59.345Z] poller_cost: 8622 (cyc), 3316 (nsec) 00:05:53.178 00:05:53.178 real 0m1.449s 00:05:53.178 user 0m1.279s 00:05:53.178 sys 0m0.061s 00:05:53.178 10:04:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.178 10:04:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.178 ************************************ 00:05:53.178 END TEST thread_poller_perf 00:05:53.178 ************************************ 00:05:53.178 10:04:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.178 10:04:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:53.178 10:04:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.178 10:04:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.178 ************************************ 00:05:53.178 START TEST thread_poller_perf 00:05:53.178 ************************************ 00:05:53.178 10:04:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:53.178 [2024-12-06 10:04:59.015877] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:05:53.178 [2024-12-06 10:04:59.016090] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:05:53.178 [2024-12-06 10:04:59.177117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.178 [2024-12-06 10:04:59.274886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.178 Running 1000 pollers for 1 seconds with 0 microseconds period. 
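The poller_cost line in the table above is plain arithmetic on the other rows: busy TSC cycles divided by the run count, then converted to nanoseconds via tsc_hz. Reproducing it in shell (the variable names are mine, the numbers are from the table):

busy=2612636718 runs=303000 tsc_hz=2600000000
cyc=$(( busy / runs ))                    # 8622 cycles per poller invocation
nsec=$(( cyc * 1000000000 / tsc_hz ))     # 3316 ns at 2.6 GHz
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The 0-microsecond-period run that follows computes the same way and comes out far cheaper per invocation (714 cyc, 274 nsec), presumably because active pollers skip the timer bookkeeping that 1-microsecond timed pollers pay for.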
00:05:54.551 [2024-12-06T10:05:00.718Z] ====================================== 00:05:54.551 [2024-12-06T10:05:00.718Z] busy:2603202006 (cyc) 00:05:54.551 [2024-12-06T10:05:00.718Z] total_run_count: 3643000 00:05:54.551 [2024-12-06T10:05:00.718Z] tsc_hz: 2600000000 (cyc) 00:05:54.551 [2024-12-06T10:05:00.718Z] ====================================== 00:05:54.551 [2024-12-06T10:05:00.718Z] poller_cost: 714 (cyc), 274 (nsec) 00:05:54.551 00:05:54.551 real 0m1.451s 00:05:54.551 user 0m1.263s 00:05:54.551 sys 0m0.079s 00:05:54.551 10:05:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.551 10:05:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:54.551 ************************************ 00:05:54.551 END TEST thread_poller_perf 00:05:54.551 ************************************ 00:05:54.551 10:05:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:54.551 00:05:54.551 real 0m3.112s 00:05:54.551 user 0m2.654s 00:05:54.551 sys 0m0.243s 00:05:54.551 ************************************ 00:05:54.551 END TEST thread 00:05:54.551 ************************************ 00:05:54.551 10:05:00 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.551 10:05:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.551 10:05:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:54.551 10:05:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:54.551 10:05:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.551 10:05:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.551 10:05:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.551 ************************************ 00:05:54.551 START TEST app_cmdline 00:05:54.551 ************************************ 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:54.551 * Looking for test storage... 
00:05:54.551 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:54.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.551 10:05:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.551 --rc genhtml_branch_coverage=1 00:05:54.551 --rc genhtml_function_coverage=1 00:05:54.551 --rc genhtml_legend=1 00:05:54.551 --rc geninfo_all_blocks=1 00:05:54.551 --rc geninfo_unexecuted_blocks=1 00:05:54.551 00:05:54.551 ' 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.551 --rc genhtml_branch_coverage=1 00:05:54.551 --rc genhtml_function_coverage=1 00:05:54.551 --rc genhtml_legend=1 00:05:54.551 --rc geninfo_all_blocks=1 00:05:54.551 --rc geninfo_unexecuted_blocks=1 00:05:54.551 00:05:54.551 ' 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.551 --rc genhtml_branch_coverage=1 00:05:54.551 --rc genhtml_function_coverage=1 00:05:54.551 --rc genhtml_legend=1 00:05:54.551 --rc geninfo_all_blocks=1 00:05:54.551 --rc geninfo_unexecuted_blocks=1 00:05:54.551 00:05:54.551 ' 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.551 --rc genhtml_branch_coverage=1 00:05:54.551 --rc genhtml_function_coverage=1 00:05:54.551 --rc genhtml_legend=1 00:05:54.551 --rc geninfo_all_blocks=1 00:05:54.551 --rc geninfo_unexecuted_blocks=1 00:05:54.551 00:05:54.551 ' 00:05:54.551 10:05:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:54.551 10:05:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59813 00:05:54.551 10:05:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59813 00:05:54.551 10:05:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59813 ']' 00:05:54.552 10:05:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.552 10:05:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.552 10:05:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.552 10:05:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.552 10:05:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:54.552 10:05:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:54.809 [2024-12-06 10:05:00.716957] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
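This spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should be callable; that is the behavior the rest of cmdline.sh exercises. A sketch of the three calls the test effectively makes below (again via its rpc_cmd wrapper around scripts/rpc.py):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed: returns the version object below
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two permitted methods
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # not on the allowlist: expect -32601 "Method not found"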
00:05:54.809 [2024-12-06 10:05:00.717078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:05:54.809 [2024-12-06 10:05:00.876067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.809 [2024-12-06 10:05:00.974057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:55.740 { 00:05:55.740 "version": "SPDK v25.01-pre git sha1 500d76084", 00:05:55.740 "fields": { 00:05:55.740 "major": 25, 00:05:55.740 "minor": 1, 00:05:55.740 "patch": 0, 00:05:55.740 "suffix": "-pre", 00:05:55.740 "commit": "500d76084" 00:05:55.740 } 00:05:55.740 } 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:55.740 10:05:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:55.740 10:05:01 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.998 request: 00:05:55.998 { 00:05:55.998 "method": "env_dpdk_get_mem_stats", 00:05:55.998 "req_id": 1 00:05:55.998 } 00:05:55.998 Got JSON-RPC error response 00:05:55.998 response: 00:05:55.998 { 00:05:55.998 "code": -32601, 00:05:55.998 "message": "Method not found" 00:05:55.998 } 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.998 10:05:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59813 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59813 ']' 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59813 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59813 00:05:55.998 killing process with pid 59813 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59813' 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 59813 00:05:55.998 10:05:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 59813 00:05:57.370 ************************************ 00:05:57.370 END TEST app_cmdline 00:05:57.370 ************************************ 00:05:57.370 00:05:57.370 real 0m3.002s 00:05:57.370 user 0m3.386s 00:05:57.370 sys 0m0.413s 00:05:57.370 10:05:03 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.370 10:05:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.629 10:05:03 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:57.629 10:05:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.629 10:05:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.629 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.629 ************************************ 00:05:57.629 START TEST version 00:05:57.629 ************************************ 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:57.629 * Looking for test storage... 
00:05:57.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.629 10:05:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.629 10:05:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.629 10:05:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.629 10:05:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.629 10:05:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.629 10:05:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.629 10:05:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.629 10:05:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.629 10:05:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.629 10:05:03 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.629 10:05:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.629 10:05:03 version -- scripts/common.sh@344 -- # case "$op" in 00:05:57.629 10:05:03 version -- scripts/common.sh@345 -- # : 1 00:05:57.629 10:05:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.629 10:05:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.629 10:05:03 version -- scripts/common.sh@365 -- # decimal 1 00:05:57.629 10:05:03 version -- scripts/common.sh@353 -- # local d=1 00:05:57.629 10:05:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.629 10:05:03 version -- scripts/common.sh@355 -- # echo 1 00:05:57.629 10:05:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.629 10:05:03 version -- scripts/common.sh@366 -- # decimal 2 00:05:57.629 10:05:03 version -- scripts/common.sh@353 -- # local d=2 00:05:57.629 10:05:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.629 10:05:03 version -- scripts/common.sh@355 -- # echo 2 00:05:57.629 10:05:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.629 10:05:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.629 10:05:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.629 10:05:03 version -- scripts/common.sh@368 -- # return 0 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.629 --rc genhtml_branch_coverage=1 00:05:57.629 --rc genhtml_function_coverage=1 00:05:57.629 --rc genhtml_legend=1 00:05:57.629 --rc geninfo_all_blocks=1 00:05:57.629 --rc geninfo_unexecuted_blocks=1 00:05:57.629 00:05:57.629 ' 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.629 --rc genhtml_branch_coverage=1 00:05:57.629 --rc genhtml_function_coverage=1 00:05:57.629 --rc genhtml_legend=1 00:05:57.629 --rc geninfo_all_blocks=1 00:05:57.629 --rc geninfo_unexecuted_blocks=1 00:05:57.629 00:05:57.629 ' 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.629 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:57.629 --rc genhtml_branch_coverage=1 00:05:57.629 --rc genhtml_function_coverage=1 00:05:57.629 --rc genhtml_legend=1 00:05:57.629 --rc geninfo_all_blocks=1 00:05:57.629 --rc geninfo_unexecuted_blocks=1 00:05:57.629 00:05:57.629 ' 00:05:57.629 10:05:03 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.629 --rc genhtml_branch_coverage=1 00:05:57.629 --rc genhtml_function_coverage=1 00:05:57.629 --rc genhtml_legend=1 00:05:57.629 --rc geninfo_all_blocks=1 00:05:57.629 --rc geninfo_unexecuted_blocks=1 00:05:57.629 00:05:57.629 ' 00:05:57.629 10:05:03 version -- app/version.sh@17 -- # get_header_version major 00:05:57.629 10:05:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:57.629 10:05:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.629 10:05:03 version -- app/version.sh@14 -- # cut -f2 00:05:57.629 10:05:03 version -- app/version.sh@17 -- # major=25 00:05:57.629 10:05:03 version -- app/version.sh@18 -- # get_header_version minor 00:05:57.629 10:05:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:57.629 10:05:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.629 10:05:03 version -- app/version.sh@14 -- # cut -f2 00:05:57.629 10:05:03 version -- app/version.sh@18 -- # minor=1 00:05:57.630 10:05:03 version -- app/version.sh@19 -- # get_header_version patch 00:05:57.630 10:05:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:57.630 10:05:03 version -- app/version.sh@14 -- # cut -f2 00:05:57.630 10:05:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.630 10:05:03 version -- app/version.sh@19 -- # patch=0 00:05:57.630 10:05:03 version -- app/version.sh@20 -- # get_header_version suffix 00:05:57.630 10:05:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:57.630 10:05:03 version -- app/version.sh@14 -- # cut -f2 00:05:57.630 10:05:03 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.630 10:05:03 version -- app/version.sh@20 -- # suffix=-pre 00:05:57.630 10:05:03 version -- app/version.sh@22 -- # version=25.1 00:05:57.630 10:05:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:57.630 10:05:03 version -- app/version.sh@28 -- # version=25.1rc0 00:05:57.630 10:05:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:57.630 10:05:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:57.630 10:05:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:57.630 10:05:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:57.630 00:05:57.630 real 0m0.196s 00:05:57.630 user 0m0.127s 00:05:57.630 sys 0m0.098s 00:05:57.630 ************************************ 00:05:57.630 END TEST version 00:05:57.630 ************************************ 00:05:57.630 10:05:03 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.630 10:05:03 version -- common/autotest_common.sh@10 -- # set +x 00:05:57.630 10:05:03 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:57.630 10:05:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:57.630 10:05:03 -- spdk/autotest.sh@194 -- # uname -s 00:05:57.630 10:05:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:57.630 10:05:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:57.630 10:05:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:57.630 10:05:03 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:57.630 10:05:03 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:57.630 10:05:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:57.630 10:05:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.630 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.630 ************************************ 00:05:57.630 START TEST blockdev_nvme 00:05:57.630 ************************************ 00:05:57.630 10:05:03 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:57.889 * Looking for test storage... 00:05:57.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.889 10:05:03 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:57.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.889 --rc genhtml_branch_coverage=1 00:05:57.889 --rc genhtml_function_coverage=1 00:05:57.889 --rc genhtml_legend=1 00:05:57.889 --rc geninfo_all_blocks=1 00:05:57.889 --rc geninfo_unexecuted_blocks=1 00:05:57.889 00:05:57.889 ' 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:57.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.889 --rc genhtml_branch_coverage=1 00:05:57.889 --rc genhtml_function_coverage=1 00:05:57.889 --rc genhtml_legend=1 00:05:57.889 --rc geninfo_all_blocks=1 00:05:57.889 --rc geninfo_unexecuted_blocks=1 00:05:57.889 00:05:57.889 ' 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:57.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.889 --rc genhtml_branch_coverage=1 00:05:57.889 --rc genhtml_function_coverage=1 00:05:57.889 --rc genhtml_legend=1 00:05:57.889 --rc geninfo_all_blocks=1 00:05:57.889 --rc geninfo_unexecuted_blocks=1 00:05:57.889 00:05:57.889 ' 00:05:57.889 10:05:03 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:57.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.889 --rc genhtml_branch_coverage=1 00:05:57.889 --rc genhtml_function_coverage=1 00:05:57.889 --rc genhtml_legend=1 00:05:57.889 --rc geninfo_all_blocks=1 00:05:57.889 --rc geninfo_unexecuted_blocks=1 00:05:57.889 00:05:57.889 ' 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.889 10:05:03 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:57.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59990 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:57.889 10:05:03 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59990 00:05:57.890 10:05:03 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59990 ']' 00:05:57.890 10:05:03 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.890 10:05:03 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.890 10:05:03 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.890 10:05:03 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.890 10:05:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:57.890 10:05:03 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:57.890 [2024-12-06 10:05:04.010384] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
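The target booting here is configured by scripts/gen_nvme.sh, whose output is captured into the json array and fed to load_subsystem_config just below. Flattened onto one line by the trace, the config is simply one bdev_nvme_attach_controller entry per QEMU NVMe controller; reformatted for readability, it reads:

{
  "subsystem": "bdev",
  "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme2", "traddr": "0000:00:12.0" } },
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme3", "traddr": "0000:00:13.0" } }
  ]
}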
00:05:57.890 [2024-12-06 10:05:04.010519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59990 ] 00:05:58.149 [2024-12-06 10:05:04.171068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.149 [2024-12-06 10:05:04.272485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.714 10:05:04 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.714 10:05:04 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:58.714 10:05:04 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:58.714 10:05:04 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:58.714 10:05:04 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:58.714 10:05:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:58.714 10:05:04 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:59.072 10:05:04 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:59.072 10:05:04 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.072 10:05:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.072 10:05:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.072 10:05:05 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:59.072 10:05:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.072 10:05:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.330 10:05:05 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6011c298-e6d2-4d2c-acbb-4fc3e24b82b1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6011c298-e6d2-4d2c-acbb-4fc3e24b82b1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "98712a8b-46d0-4f82-bb5c-13ee71ab600b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "98712a8b-46d0-4f82-bb5c-13ee71ab600b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "aa008d35-8714-4620-bc10-718d2f0f938e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa008d35-8714-4620-bc10-718d2f0f938e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "359e027e-6a1f-4a3e-a6dc-556e96e44528"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "359e027e-6a1f-4a3e-a6dc-556e96e44528",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8f75aceb-4a72-42cb-b7d8-b97f058923f0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "8f75aceb-4a72-42cb-b7d8-b97f058923f0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "22321b80-c47d-46d9-963e-07356193e1cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "22321b80-c47d-46d9-963e-07356193e1cf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:59.330 10:05:05 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59990 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59990 ']' 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59990 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:59.330 10:05:05 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.330 10:05:05 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59990 00:05:59.330 killing process with pid 59990 00:05:59.331 10:05:05 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.331 10:05:05 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.331 10:05:05 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59990' 00:05:59.331 10:05:05 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59990 00:05:59.331 10:05:05 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59990 00:06:01.226 10:05:06 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:01.226 10:05:06 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:01.226 10:05:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:01.226 10:05:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.226 10:05:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:01.226 ************************************ 00:06:01.226 START TEST bdev_hello_world 00:06:01.226 ************************************ 00:06:01.226 10:05:06 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:01.226 [2024-12-06 10:05:06.956672] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:01.226 [2024-12-06 10:05:06.956807] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60069 ] 00:06:01.226 [2024-12-06 10:05:07.118251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.226 [2024-12-06 10:05:07.221316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.794 [2024-12-06 10:05:07.762041] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:01.794 [2024-12-06 10:05:07.762093] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:01.794 [2024-12-06 10:05:07.762112] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:01.794 [2024-12-06 10:05:07.764550] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:01.794 [2024-12-06 10:05:07.765468] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:01.794 [2024-12-06 10:05:07.765496] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:01.794 [2024-12-06 10:05:07.765997] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
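Before hello_bdev ran, blockdev.sh steps 82-83 and 785-786 above attached the four QEMU NVMe controllers from the gen_nvme.sh JSON and then collected the unclaimed bdev names over RPC. A condensed sketch of that discovery flow, assuming a target still listening on the default socket:

# Attach one controller by PCIe address, then list bdevs no module has claimed.
SPDK_ROOT=/home/vagrant/spdk_repo/spdk
"$SPDK_ROOT/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
"$SPDK_ROOT/scripts/rpc.py" bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'

The jq filter is the same one the test uses to build bdevs_name; the single-controller attach stands in for the four-controller JSON config loaded above.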
00:06:01.794 00:06:01.794 [2024-12-06 10:05:07.766019] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:02.361 ************************************ 00:06:02.361 END TEST bdev_hello_world 00:06:02.361 ************************************ 00:06:02.361 00:06:02.361 real 0m1.601s 00:06:02.361 user 0m1.309s 00:06:02.361 sys 0m0.185s 00:06:02.361 10:05:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.361 10:05:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:02.620 10:05:08 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:02.620 10:05:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:02.620 10:05:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.620 10:05:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:02.620 ************************************ 00:06:02.620 START TEST bdev_bounds 00:06:02.620 ************************************ 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60111 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.620 Process bdevio pid: 60111 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60111' 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60111 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60111 ']' 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.620 10:05:08 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:02.620 [2024-12-06 10:05:08.605097] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
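The bdev_bounds test starting here is a two-process handshake: bdevio comes up in wait mode, and a helper script fires the CUnit suites once its RPC socket is live. A condensed sketch of the pattern, with paths and flags as they appear in this run (-w makes bdevio wait for an RPC start signal; -s 0 is the memory-size argument passed here):

SPDK_ROOT=/home/vagrant/spdk_repo/spdk
# Start bdevio in wait-for-RPC mode with the shared bdev JSON config.
"$SPDK_ROOT/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_ROOT/test/bdev/bdev.json" &
bdevio_pid=$!
# Trigger every registered CUnit suite, then tear the process down.
"$SPDK_ROOT/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"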
00:06:02.620 [2024-12-06 10:05:08.605221] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60111 ] 00:06:02.620 [2024-12-06 10:05:08.767093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.878 [2024-12-06 10:05:08.870113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.878 [2024-12-06 10:05:08.870374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.878 [2024-12-06 10:05:08.870509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.512 10:05:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.512 10:05:09 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:03.512 10:05:09 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:03.512 I/O targets: 00:06:03.512 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:03.512 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:03.512 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:03.512 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:03.512 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:03.512 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:03.512 00:06:03.512 00:06:03.512 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.512 http://cunit.sourceforge.net/ 00:06:03.512 00:06:03.512 00:06:03.512 Suite: bdevio tests on: Nvme3n1 00:06:03.512 Test: blockdev write read block ...passed 00:06:03.512 Test: blockdev write zeroes read block ...passed 00:06:03.512 Test: blockdev write zeroes read no split ...passed 00:06:03.512 Test: blockdev write zeroes read split ...passed 00:06:03.512 Test: blockdev write zeroes read split partial ...passed 00:06:03.512 Test: blockdev reset ...[2024-12-06 10:05:09.575433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:03.512 [2024-12-06 10:05:09.580177] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:03.512 passed 00:06:03.512 Test: blockdev write read 8 blocks ...passed 00:06:03.512 Test: blockdev write read size > 128k ...passed 00:06:03.512 Test: blockdev write read invalid size ...passed 00:06:03.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:03.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:03.512 Test: blockdev write read max offset ...passed 00:06:03.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:03.512 Test: blockdev writev readv 8 blocks ...passed 00:06:03.512 Test: blockdev writev readv 30 x 1block ...passed 00:06:03.512 Test: blockdev writev readv block ...passed 00:06:03.512 Test: blockdev writev readv size > 128k ...passed 00:06:03.512 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:03.512 Test: blockdev comparev and writev ...[2024-12-06 10:05:09.599628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2caa0a000 len:0x1000 00:06:03.512 [2024-12-06 10:05:09.599675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:03.512 passed 00:06:03.512 Test: blockdev nvme passthru rw ...passed 00:06:03.512 Test: blockdev nvme passthru vendor specific ...[2024-12-06 10:05:09.602248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:03.512 [2024-12-06 10:05:09.602281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:03.512 passed 00:06:03.512 Test: blockdev nvme admin passthru ...passed 00:06:03.512 Test: blockdev copy ...passed 00:06:03.512 Suite: bdevio tests on: Nvme2n3 00:06:03.512 Test: blockdev write read block ...passed 00:06:03.512 Test: blockdev write zeroes read block ...passed 00:06:03.512 Test: blockdev write zeroes read no split ...passed 00:06:03.512 Test: blockdev write zeroes read split ...passed 00:06:03.512 Test: blockdev write zeroes read split partial ...passed 00:06:03.512 Test: blockdev reset ...[2024-12-06 10:05:09.656486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:03.512 [2024-12-06 10:05:09.660005] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:03.512 passed 00:06:03.512 Test: blockdev write read 8 blocks ...passed 00:06:03.512 Test: blockdev write read size > 128k ...passed 00:06:03.512 Test: blockdev write read invalid size ...passed 00:06:03.512 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:03.512 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:03.512 Test: blockdev write read max offset ...passed 00:06:03.512 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:03.512 Test: blockdev writev readv 8 blocks ...passed 00:06:03.512 Test: blockdev writev readv 30 x 1block ...passed 00:06:03.512 Test: blockdev writev readv block ...passed 00:06:03.512 Test: blockdev writev readv size > 128k ...passed 00:06:03.771 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:03.771 Test: blockdev comparev and writev ...[2024-12-06 10:05:09.680056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a2606000 len:0x1000 00:06:03.771 [2024-12-06 10:05:09.680107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:03.771 passed 00:06:03.771 Test: blockdev nvme passthru rw ...passed 00:06:03.771 Test: blockdev nvme passthru vendor specific ...[2024-12-06 10:05:09.682471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:03.771 [2024-12-06 10:05:09.682503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:03.771 passed 00:06:03.771 Test: blockdev nvme admin passthru ...passed 00:06:03.771 Test: blockdev copy ...passed 00:06:03.771 Suite: bdevio tests on: Nvme2n2 00:06:03.771 Test: blockdev write read block ...passed 00:06:03.771 Test: blockdev write zeroes read block ...passed 00:06:03.771 Test: blockdev write zeroes read no split ...passed 00:06:03.771 Test: blockdev write zeroes read split ...passed 00:06:03.771 Test: blockdev write zeroes read split partial ...passed 00:06:03.771 Test: blockdev reset ...[2024-12-06 10:05:09.734563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:03.771 [2024-12-06 10:05:09.738206] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:03.771 passed 00:06:03.771 Test: blockdev write read 8 blocks ...passed
00:06:03.772 Test: blockdev write read size > 128k ...passed 00:06:03.772 Test: blockdev write read invalid size ...passed 00:06:03.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:03.772 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:03.772 Test: blockdev write read max offset ...passed 00:06:03.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:03.772 Test: blockdev writev readv 8 blocks ...passed 00:06:03.772 Test: blockdev writev readv 30 x 1block ...passed 00:06:03.772 Test: blockdev writev readv block ...passed 00:06:03.772 Test: blockdev writev readv size > 128k ...passed 00:06:03.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:03.772 Test: blockdev comparev and writev ...[2024-12-06 10:05:09.748562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d303c000 len:0x1000 00:06:03.772 [2024-12-06 10:05:09.748610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:03.772 passed 00:06:03.772 Test: blockdev nvme passthru rw ...passed 00:06:03.772 Test: blockdev nvme passthru vendor specific ...passed 00:06:03.772 Test: blockdev nvme admin passthru ...[2024-12-06 10:05:09.749303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:03.772 [2024-12-06 10:05:09.749348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:03.772 passed 00:06:03.772 Test: blockdev copy ...passed 00:06:03.772 Suite: bdevio tests on: Nvme2n1 00:06:03.772 Test: blockdev write read block ...passed 00:06:03.772 Test: blockdev write zeroes read block ...passed 00:06:03.772 Test: blockdev write zeroes read no split ...passed 00:06:03.772 Test: blockdev write zeroes read split ...passed 00:06:03.772 Test: blockdev write zeroes read split partial ...passed 00:06:03.772 Test: blockdev reset ...[2024-12-06 10:05:09.799578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:03.772 [2024-12-06 10:05:09.803688] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:03.772 passed 00:06:03.772 Test: blockdev write read 8 blocks ...passed
00:06:03.772 Test: blockdev write read size > 128k ...passed 00:06:03.772 Test: blockdev write read invalid size ...passed 00:06:03.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:03.772 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:03.772 Test: blockdev write read max offset ...passed 00:06:03.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:03.772 Test: blockdev writev readv 8 blocks ...passed 00:06:03.772 Test: blockdev writev readv 30 x 1block ...passed 00:06:03.772 Test: blockdev writev readv block ...passed 00:06:03.772 Test: blockdev writev readv size > 128k ...passed 00:06:03.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:03.772 Test: blockdev comparev and writev ...[2024-12-06 10:05:09.821824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3038000 len:0x1000 00:06:03.772 [2024-12-06 10:05:09.821954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:03.772 passed 00:06:03.772 Test: blockdev nvme passthru rw ...passed 00:06:03.772 Test: blockdev nvme passthru vendor specific ...passed 00:06:03.772 Test: blockdev nvme admin passthru ...[2024-12-06 10:05:09.823744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:03.772 [2024-12-06 10:05:09.823778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:03.772 passed 00:06:03.772 Test: blockdev copy ...passed 00:06:03.772 Suite: bdevio tests on: Nvme1n1 00:06:03.772 Test: blockdev write read block ...passed 00:06:03.772 Test: blockdev write zeroes read block ...passed 00:06:03.772 Test: blockdev write zeroes read no split ...passed 00:06:03.772 Test: blockdev write zeroes read split ...passed 00:06:03.772 Test: blockdev write zeroes read split partial ...passed 00:06:03.772 Test: blockdev reset ...[2024-12-06 10:05:09.903850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:03.772 [2024-12-06 10:05:09.907717] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:06:03.772 passed 00:06:03.772 Test: blockdev write read 8 blocks ...passed
00:06:03.772 Test: blockdev write read size > 128k ...passed 00:06:03.772 Test: blockdev write read invalid size ...passed 00:06:03.772 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:03.772 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:03.772 Test: blockdev write read max offset ...passed 00:06:03.772 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:03.772 Test: blockdev writev readv 8 blocks ...passed 00:06:03.772 Test: blockdev writev readv 30 x 1block ...passed 00:06:03.772 Test: blockdev writev readv block ...passed 00:06:03.772 Test: blockdev writev readv size > 128k ...passed 00:06:03.772 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:03.772 Test: blockdev comparev and writev ...[2024-12-06 10:05:09.927901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3034000 len:0x1000 00:06:03.772 [2024-12-06 10:05:09.927945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:03.772 passed 00:06:03.772 Test: blockdev nvme passthru rw ...passed 00:06:03.772 Test: blockdev nvme passthru vendor specific ...passed 00:06:03.772 Test: blockdev nvme admin passthru ...[2024-12-06 10:05:09.929604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:03.772 [2024-12-06 10:05:09.929637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:03.772 passed 00:06:04.031 Test: blockdev copy ...passed 00:06:04.031 Suite: bdevio tests on: Nvme0n1 00:06:04.031 Test: blockdev write read block ...passed 00:06:04.031 Test: blockdev write zeroes read block ...passed 00:06:04.031 Test: blockdev write zeroes read no split ...passed 00:06:04.031 Test: blockdev write zeroes read split ...passed 00:06:04.031 Test: blockdev write zeroes read split partial ...passed 00:06:04.031 Test: blockdev reset ...[2024-12-06 10:05:10.055480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:04.031 [2024-12-06 10:05:10.059157] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:04.031 passed 00:06:04.031 Test: blockdev write read 8 blocks ...passed 00:06:04.031 Test: blockdev write read size > 128k ...passed 00:06:04.031 Test: blockdev write read invalid size ...passed 00:06:04.031 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:04.031 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:04.031 Test: blockdev write read max offset ...passed 00:06:04.031 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:04.031 Test: blockdev writev readv 8 blocks ...passed 00:06:04.031 Test: blockdev writev readv 30 x 1block ...passed 00:06:04.031 Test: blockdev writev readv block ...passed 00:06:04.031 Test: blockdev writev readv size > 128k ...passed 00:06:04.031 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:04.031 Test: blockdev comparev and writev ...[2024-12-06 10:05:10.069665] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:04.031 separate metadata which is not supported yet. 00:06:04.031 passed 00:06:04.031 Test: blockdev nvme passthru rw ...passed
00:06:04.031 Test: blockdev nvme passthru vendor specific ...[2024-12-06 10:05:10.070881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:04.031 [2024-12-06 10:05:10.071037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:04.031 passed 00:06:04.031 Test: blockdev nvme admin passthru ...passed 00:06:04.031 Test: blockdev copy ...passed 00:06:04.031 00:06:04.031 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.031 suites 6 6 n/a 0 0 00:06:04.031 tests 138 138 138 0 0 00:06:04.031 asserts 893 893 893 0 n/a 00:06:04.031 00:06:04.031 Elapsed time = 1.352 seconds 00:06:04.031 0 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60111 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60111 ']' 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60111 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60111 00:06:04.031 killing process with pid 60111 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60111' 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60111 00:06:04.031 10:05:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60111
rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60165 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60165 /var/tmp/spdk-nbd.sock 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60165 ']' 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:04.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.967 10:05:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:04.967 [2024-12-06 10:05:10.920047] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
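The nbd_function_test that bdev_svc is being launched for here exports each bdev through the kernel nbd driver, proves it with a single direct-I/O read, and tears it down again. A condensed sketch of one such round trip, using the same RPCs and socket path that appear in the output below:

RPC_SOCK=/var/tmp/spdk-nbd.sock
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $RPC_SOCK"
# Export the bdev as a kernel block device via the nbd driver.
$RPC nbd_start_disk Nvme0n1 /dev/nbd0
# A single 4 KiB O_DIRECT read confirms the device is actually serving I/O.
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
# Detach the nbd device again.
$RPC nbd_stop_disk /dev/nbd0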
00:06:04.967 [2024-12-06 10:05:10.920156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.967 [2024-12-06 10:05:11.082558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.225 [2024-12-06 10:05:11.185826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:05.791 10:05:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.048 1+0 records in 
00:06:06.048 1+0 records out 00:06:06.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134467 s, 3.0 MB/s 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.048 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.307 1+0 records in 00:06:06.307 1+0 records out 00:06:06.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106392 s, 3.8 MB/s 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.307 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:06.564 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:06.564 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:06.564 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:06.564 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:06.564 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.565 1+0 records in 00:06:06.565 1+0 records out 00:06:06.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127253 s, 3.2 MB/s 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.565 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:06.823 1+0 records in 00:06:06.823 1+0 records out 00:06:06.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440746 s, 9.3 MB/s 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.823 10:05:12 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:06.823 10:05:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.081 1+0 records in 00:06:07.081 1+0 records out 00:06:07.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00152517 s, 2.7 MB/s 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:07.081 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:07.339 1+0 records in 00:06:07.339 1+0 records out 00:06:07.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00297408 s, 1.4 MB/s 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:07.339 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd0", 00:06:07.597 "bdev_name": "Nvme0n1" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd1", 00:06:07.597 "bdev_name": "Nvme1n1" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd2", 00:06:07.597 "bdev_name": "Nvme2n1" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd3", 00:06:07.597 "bdev_name": "Nvme2n2" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd4", 00:06:07.597 "bdev_name": "Nvme2n3" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd5", 00:06:07.597 "bdev_name": "Nvme3n1" 00:06:07.597 } 00:06:07.597 ]' 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd0", 00:06:07.597 "bdev_name": "Nvme0n1" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd1", 00:06:07.597 "bdev_name": "Nvme1n1" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd2", 00:06:07.597 "bdev_name": "Nvme2n1" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd3", 00:06:07.597 "bdev_name": "Nvme2n2" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd4", 00:06:07.597 "bdev_name": "Nvme2n3" 00:06:07.597 }, 00:06:07.597 { 00:06:07.597 "nbd_device": "/dev/nbd5", 00:06:07.597 "bdev_name": "Nvme3n1" 00:06:07.597 } 00:06:07.597 ]' 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.597 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.855 10:05:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.113 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.369 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.626 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.882 10:05:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.138 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.139 10:05:15 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.139 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:09.395 /dev/nbd0 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.395 
10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.395 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.396 1+0 records in 00:06:09.396 1+0 records out 00:06:09.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001276 s, 3.2 MB/s 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.396 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:09.396 /dev/nbd1 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.653 1+0 records in 00:06:09.653 1+0 records out 00:06:09.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111998 s, 3.7 MB/s 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- 
# return 0 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:09.653 /dev/nbd10 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.653 1+0 records in 00:06:09.653 1+0 records out 00:06:09.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125382 s, 3.3 MB/s 00:06:09.653 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.910 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.910 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.910 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.910 10:05:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.910 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.910 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.910 10:05:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:09.910 /dev/nbd11 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:09.910 1+0 records in 00:06:09.910 1+0 records out 00:06:09.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00178615 s, 2.3 MB/s 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:09.910 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:10.167 /dev/nbd12 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.167 1+0 records in 00:06:10.167 1+0 records out 00:06:10.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117554 s, 3.5 MB/s 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:10.167 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:10.424 /dev/nbd13 00:06:10.424 10:05:16 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:10.424 1+0 records in 00:06:10.424 1+0 records out 00:06:10.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894942 s, 4.6 MB/s 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.424 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd0", 00:06:10.681 "bdev_name": "Nvme0n1" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd1", 00:06:10.681 "bdev_name": "Nvme1n1" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd10", 00:06:10.681 "bdev_name": "Nvme2n1" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd11", 00:06:10.681 "bdev_name": "Nvme2n2" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd12", 00:06:10.681 "bdev_name": "Nvme2n3" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd13", 00:06:10.681 "bdev_name": "Nvme3n1" 00:06:10.681 } 00:06:10.681 ]' 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd0", 00:06:10.681 "bdev_name": "Nvme0n1" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd1", 00:06:10.681 "bdev_name": "Nvme1n1" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd10", 00:06:10.681 "bdev_name": "Nvme2n1" 00:06:10.681 }, 00:06:10.681 
{ 00:06:10.681 "nbd_device": "/dev/nbd11", 00:06:10.681 "bdev_name": "Nvme2n2" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd12", 00:06:10.681 "bdev_name": "Nvme2n3" 00:06:10.681 }, 00:06:10.681 { 00:06:10.681 "nbd_device": "/dev/nbd13", 00:06:10.681 "bdev_name": "Nvme3n1" 00:06:10.681 } 00:06:10.681 ]' 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.681 /dev/nbd1 00:06:10.681 /dev/nbd10 00:06:10.681 /dev/nbd11 00:06:10.681 /dev/nbd12 00:06:10.681 /dev/nbd13' 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.681 /dev/nbd1 00:06:10.681 /dev/nbd10 00:06:10.681 /dev/nbd11 00:06:10.681 /dev/nbd12 00:06:10.681 /dev/nbd13' 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:10.681 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:10.682 256+0 records in 00:06:10.682 256+0 records out 00:06:10.682 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420281 s, 249 MB/s 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.682 10:05:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.245 256+0 records in 00:06:11.245 256+0 records out 00:06:11.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.433742 s, 2.4 MB/s 00:06:11.245 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.245 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.245 256+0 records in 00:06:11.245 256+0 records out 00:06:11.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114541 s, 9.2 MB/s 00:06:11.245 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.245 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:11.568 256+0 records in 00:06:11.568 256+0 records out 00:06:11.568 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.19657 s, 5.3 MB/s 00:06:11.568 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.568 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:11.840 256+0 records in 00:06:11.840 256+0 records out 00:06:11.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195346 s, 5.4 MB/s 00:06:11.840 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.840 10:05:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:12.097 256+0 records in 00:06:12.097 256+0 records out 00:06:12.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.280399 s, 3.7 MB/s 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:12.097 256+0 records in 00:06:12.097 256+0 records out 00:06:12.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143317 s, 7.3 MB/s 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.097 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.354 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.612 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.871 10:05:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.128 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:13.386 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.387 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.387 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:13.387 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.387 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.387 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.387 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.387 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:13.644 10:05:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:13.903 malloc_lvol_verify 00:06:13.903 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:14.160 59859c9f-8656-488a-b3f5-565cbe5a11b8 00:06:14.160 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:14.419 d159603c-e688-4d74-8be6-b0cd4e3c9a19 00:06:14.419 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:14.677 /dev/nbd0 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:14.677 mke2fs 1.47.0 (5-Feb-2023) 00:06:14.677 Discarding device blocks: 0/4096 done 00:06:14.677 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:14.677 00:06:14.677 Allocating group tables: 0/1 done 00:06:14.677 Writing inode tables: 0/1 done 00:06:14.677 Creating journal (1024 blocks): done 00:06:14.677 Writing superblocks and filesystem accounting information: 0/1 done 00:06:14.677 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:14.677 10:05:20 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.677 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60165 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60165 ']' 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60165 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60165 00:06:14.935 killing process with pid 60165 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60165' 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60165 00:06:14.935 10:05:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60165 00:06:16.308 10:05:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:16.308 00:06:16.308 real 0m11.325s 00:06:16.308 user 0m15.137s 00:06:16.308 sys 0m3.542s 00:06:16.308 10:05:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.308 10:05:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:16.308 ************************************ 00:06:16.308 END TEST bdev_nbd 00:06:16.308 ************************************ 00:06:16.308 10:05:22 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:16.308 10:05:22 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:16.308 skipping fio tests on NVMe due to multi-ns failures. 00:06:16.308 10:05:22 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
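# The nbd setup/teardown traced above keeps repeating one bounded polling
# idiom: probe /proc/partitions up to 20 times for the device name, and
# break as soon as it appears (waitfornbd) or, on teardown, as soon as it
# is gone (waitfornbd_exit). A minimal bash sketch reconstructed from the
# trace; the sleep between probes is an assumption (xtrace does not show
# the delay), and the follow-up dd/stat read-back visible above is omitted:
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # ready once the kernel lists the device in /proc/partitions
        if grep -q -w "$nbd_name" /proc/partitions; then
            return 0
        fi
        sleep 0.1  # assumed probe interval
    done
    return 1  # device never appeared within the retry budget
}
# waitfornbd_exit is the same loop with the grep condition inverted.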
00:06:16.308 10:05:22 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:16.308 10:05:22 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:16.308 10:05:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:16.308 10:05:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.308 10:05:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.308 ************************************ 00:06:16.308 START TEST bdev_verify 00:06:16.308 ************************************ 00:06:16.308 10:05:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:16.308 [2024-12-06 10:05:22.315107] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:16.308 [2024-12-06 10:05:22.315225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60556 ] 00:06:16.565 [2024-12-06 10:05:22.474586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.565 [2024-12-06 10:05:22.579380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.565 [2024-12-06 10:05:22.579553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.130 Running I/O for 5 seconds... 00:06:19.431 16256.00 IOPS, 63.50 MiB/s [2024-12-06T10:05:26.527Z] 16960.00 IOPS, 66.25 MiB/s [2024-12-06T10:05:27.458Z] 17877.33 IOPS, 69.83 MiB/s [2024-12-06T10:05:28.419Z] 17936.00 IOPS, 70.06 MiB/s [2024-12-06T10:05:28.419Z] 18099.20 IOPS, 70.70 MiB/s 00:06:22.252 Latency(us) 00:06:22.252 [2024-12-06T10:05:28.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.253 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x0 length 0xbd0bd 00:06:22.253 Nvme0n1 : 5.07 1514.01 5.91 0.00 0.00 84319.90 13913.80 87515.77 00:06:22.253 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:22.253 Nvme0n1 : 5.06 1466.14 5.73 0.00 0.00 86973.89 13712.15 91145.45 00:06:22.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x0 length 0xa0000 00:06:22.253 Nvme1n1 : 5.07 1513.56 5.91 0.00 0.00 84176.42 15930.29 79046.50 00:06:22.253 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0xa0000 length 0xa0000 00:06:22.253 Nvme1n1 : 5.07 1465.71 5.73 0.00 0.00 86807.41 16333.59 85902.57 00:06:22.253 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x0 length 0x80000 00:06:22.253 Nvme2n1 : 5.08 1512.67 5.91 0.00 0.00 84005.28 16232.76 74610.22 00:06:22.253 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x80000 length 0x80000 00:06:22.253 Nvme2n1 : 5.07 1465.32 5.72 0.00 0.00 86617.99 17644.31 87919.06 00:06:22.253 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x0 length 0x80000 00:06:22.253 Nvme2n2 : 5.08 1512.26 5.91 0.00 0.00 83895.89 16535.24 75820.11 00:06:22.253 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x80000 length 0x80000 00:06:22.253 Nvme2n2 : 5.08 1474.00 5.76 0.00 0.00 85926.32 5696.59 85902.57 00:06:22.253 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x0 length 0x80000 00:06:22.253 Nvme2n3 : 5.08 1511.86 5.91 0.00 0.00 83754.88 16837.71 74610.22 00:06:22.253 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x80000 length 0x80000 00:06:22.253 Nvme2n3 : 5.08 1473.11 5.75 0.00 0.00 85783.03 7763.50 85499.27 00:06:22.253 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x0 length 0x20000 00:06:22.253 Nvme3n1 : 5.08 1510.98 5.90 0.00 0.00 83590.47 14216.27 75416.81 00:06:22.253 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:22.253 Verification LBA range: start 0x20000 length 0x20000 00:06:22.253 Nvme3n1 : 5.10 1481.84 5.79 0.00 0.00 85242.45 9578.34 89935.56 00:06:22.253 [2024-12-06T10:05:28.420Z] =================================================================================================================== 00:06:22.253 [2024-12-06T10:05:28.420Z] Total : 17901.46 69.93 0.00 0.00 85073.28 5696.59 91145.45 00:06:24.152 00:06:24.152 real 0m7.679s 00:06:24.152 user 0m14.375s 00:06:24.152 sys 0m0.238s 00:06:24.152 10:05:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.152 10:05:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:24.152 ************************************ 00:06:24.152 END TEST bdev_verify 00:06:24.152 ************************************ 00:06:24.152 10:05:29 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:24.152 10:05:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:24.152 10:05:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.152 10:05:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.152 ************************************ 00:06:24.152 START TEST bdev_verify_big_io 00:06:24.152 ************************************ 00:06:24.152 10:05:29 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:24.152 [2024-12-06 10:05:30.060617] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
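# Flag-by-flag reading of the bdevperf invocation behind the verify pass
# above (and the big-I/O pass that follows); every value is echoed back in
# the per-job tables, so this is a restatement of what the log confirms:
#   --json bdev.json   bdev configuration to attach
#   -q 128             queue depth per job      (tables: "depth: 128")
#   -o 4096            I/O size in bytes        (65536 in the big-I/O pass)
#   -w verify          write, read back, compare
#   -t 5               run time in seconds
#   -m 0x3             core mask: reactors on cores 0 and 1
#   -C                 purpose not inferable from this log alone; left unannotated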
00:06:24.153 [2024-12-06 10:05:30.060744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60659 ] 00:06:24.153 [2024-12-06 10:05:30.220778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.410 [2024-12-06 10:05:30.324896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.410 [2024-12-06 10:05:30.325023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.975 Running I/O for 5 seconds... 00:06:30.810 724.00 IOPS, 45.25 MiB/s [2024-12-06T10:05:37.234Z] 2410.50 IOPS, 150.66 MiB/s [2024-12-06T10:05:37.234Z] 2806.33 IOPS, 175.40 MiB/s 00:06:31.067 Latency(us) 00:06:31.067 [2024-12-06T10:05:37.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.067 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.067 Verification LBA range: start 0x0 length 0xbd0b 00:06:31.067 Nvme0n1 : 5.72 111.86 6.99 0.00 0.00 1097443.49 17644.31 1071160.71 00:06:31.067 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.067 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:31.067 Nvme0n1 : 5.82 106.28 6.64 0.00 0.00 1156544.01 27222.65 1677721.60 00:06:31.067 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.067 Verification LBA range: start 0x0 length 0xa000 00:06:31.067 Nvme1n1 : 5.83 113.92 7.12 0.00 0.00 1043193.12 64124.46 974369.08 00:06:31.067 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.067 Verification LBA range: start 0xa000 length 0xa000 00:06:31.067 Nvme1n1 : 5.83 106.76 6.67 0.00 0.00 1114255.46 51420.55 1716438.25 00:06:31.067 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.067 Verification LBA range: start 0x0 length 0x8000 00:06:31.067 Nvme2n1 : 5.83 115.05 7.19 0.00 0.00 1003999.20 106470.79 1019538.51 00:06:31.068 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.068 Verification LBA range: start 0x8000 length 0x8000 00:06:31.068 Nvme2n1 : 5.93 115.54 7.22 0.00 0.00 999846.36 72190.42 1497043.89 00:06:31.068 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.068 Verification LBA range: start 0x0 length 0x8000 00:06:31.068 Nvme2n2 : 5.88 119.82 7.49 0.00 0.00 941616.98 40329.85 1051802.39 00:06:31.068 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.068 Verification LBA range: start 0x8000 length 0x8000 00:06:31.068 Nvme2n2 : 5.93 121.24 7.58 0.00 0.00 925555.10 23592.96 1251838.42 00:06:31.068 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.068 Verification LBA range: start 0x0 length 0x8000 00:06:31.068 Nvme2n3 : 5.96 124.52 7.78 0.00 0.00 876141.18 38918.30 1084066.26 00:06:31.068 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:31.068 Verification LBA range: start 0x8000 length 0x8000 00:06:31.068 Nvme2n3 : 5.98 120.82 7.55 0.00 0.00 893920.68 29642.44 1845493.76 00:06:31.068 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:31.068 Verification LBA range: start 0x0 length 0x2000 00:06:31.068 Nvme3n1 : 5.97 139.33 8.71 0.00 0.00 765713.60 1701.42 1090519.04 00:06:31.068 Job: Nvme3n1 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 65536) 00:06:31.068 Verification LBA range: start 0x2000 length 0x2000 00:06:31.068 Nvme3n1 : 6.02 154.11 9.63 0.00 0.00 682371.91 705.77 1342177.28 00:06:31.068 [2024-12-06T10:05:37.235Z] =================================================================================================================== 00:06:31.068 [2024-12-06T10:05:37.235Z] Total : 1449.25 90.58 0.00 0.00 942564.92 705.77 1845493.76 00:06:33.065 00:06:33.065 real 0m9.084s 00:06:33.065 user 0m17.218s 00:06:33.065 sys 0m0.247s 00:06:33.065 ************************************ 00:06:33.065 END TEST bdev_verify_big_io 00:06:33.065 ************************************ 00:06:33.065 10:05:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.065 10:05:39 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:33.065 10:05:39 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.065 10:05:39 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:33.065 10:05:39 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.065 10:05:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:33.065 ************************************ 00:06:33.065 START TEST bdev_write_zeroes 00:06:33.065 ************************************ 00:06:33.065 10:05:39 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:33.065 [2024-12-06 10:05:39.208649] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:33.065 [2024-12-06 10:05:39.208769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60768 ] 00:06:33.322 [2024-12-06 10:05:39.370014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.322 [2024-12-06 10:05:39.486000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.255 Running I/O for 1 seconds... 
00:06:35.186 46464.00 IOPS, 181.50 MiB/s 00:06:35.186 Latency(us) 00:06:35.186 [2024-12-06T10:05:41.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.186 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.186 Nvme0n1 : 1.04 7640.79 29.85 0.00 0.00 16697.48 5595.77 37506.76 00:06:35.186 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.186 Nvme1n1 : 1.04 7623.20 29.78 0.00 0.00 16711.97 11342.77 37305.11 00:06:35.186 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.186 Nvme2n1 : 1.04 7605.60 29.71 0.00 0.00 16640.45 8368.44 36095.21 00:06:35.186 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.186 Nvme2n2 : 1.05 7588.18 29.64 0.00 0.00 16650.60 10233.70 35490.26 00:06:35.186 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.186 Nvme2n3 : 1.05 7570.93 29.57 0.00 0.00 16648.31 9628.75 35086.97 00:06:35.186 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:35.186 Nvme3n1 : 1.05 7553.45 29.51 0.00 0.00 16643.03 9023.80 38111.70 00:06:35.186 [2024-12-06T10:05:41.353Z] =================================================================================================================== 00:06:35.186 [2024-12-06T10:05:41.353Z] Total : 45582.14 178.06 0.00 0.00 16665.31 5595.77 38111.70 00:06:35.751 00:06:35.751 real 0m2.748s 00:06:35.751 user 0m2.439s 00:06:35.751 sys 0m0.189s 00:06:35.751 10:05:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.751 ************************************ 00:06:35.751 END TEST bdev_write_zeroes 00:06:35.751 ************************************ 00:06:35.751 10:05:41 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:36.009 10:05:41 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.009 10:05:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:36.009 10:05:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.009 10:05:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.009 ************************************ 00:06:36.009 START TEST bdev_json_nonenclosed 00:06:36.009 ************************************ 00:06:36.009 10:05:41 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.009 [2024-12-06 10:05:42.017962] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
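# Quick arithmetic check on the write_zeroes headline above: with -o 4096
# the MiB/s column is simply IOPS x 4 KiB, e.g.
#   46464 IOPS x 4096 B = 190,316,544 B/s / 1,048,576 = 181.50 MiB/s
# matching the reported "46464.00 IOPS, 181.50 MiB/s" exactly.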
00:06:36.009 [2024-12-06 10:05:42.018090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60821 ] 00:06:36.268 [2024-12-06 10:05:42.178252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.268 [2024-12-06 10:05:42.280999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.268 [2024-12-06 10:05:42.281083] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:36.268 [2024-12-06 10:05:42.281100] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:36.268 [2024-12-06 10:05:42.281111] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.533 00:06:36.533 real 0m0.496s 00:06:36.533 user 0m0.308s 00:06:36.533 sys 0m0.084s 00:06:36.533 10:05:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.533 10:05:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:36.533 ************************************ 00:06:36.533 END TEST bdev_json_nonenclosed 00:06:36.533 ************************************ 00:06:36.533 10:05:42 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.533 10:05:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:36.533 10:05:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.533 10:05:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.533 ************************************ 00:06:36.533 START TEST bdev_json_nonarray 00:06:36.533 ************************************ 00:06:36.533 10:05:42 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.533 [2024-12-06 10:05:42.577234] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:36.533 [2024-12-06 10:05:42.577392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60847 ] 00:06:36.791 [2024-12-06 10:05:42.738221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.791 [2024-12-06 10:05:42.839109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.791 [2024-12-06 10:05:42.839199] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
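Both JSON negative tests target json_config_prepare_ctx: nonenclosed.json feeds bdevperf a config that is not wrapped in an outer object, and nonarray.json (whose failure was just logged) makes "subsystems" something other than an array. For contrast, a minimal well-formed SPDK config has the shape below; the content is illustrative, not the exact test files:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }

Dropping the outer braces reproduces the "not enclosed in {}" error above, and turning the "subsystems" array into an object reproduces the "'subsystems' should be an array" error.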
00:06:36.791 [2024-12-06 10:05:42.839217] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:36.791 [2024-12-06 10:05:42.839226] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.049 00:06:37.049 real 0m0.504s 00:06:37.049 user 0m0.305s 00:06:37.049 sys 0m0.094s 00:06:37.049 10:05:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.049 ************************************ 00:06:37.049 END TEST bdev_json_nonarray 00:06:37.049 ************************************ 00:06:37.049 10:05:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:37.049 10:05:43 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:37.049 00:06:37.049 real 0m39.294s 00:06:37.049 user 0m59.930s 00:06:37.049 sys 0m5.564s 00:06:37.049 ************************************ 00:06:37.049 10:05:43 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.049 10:05:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.049 END TEST blockdev_nvme 00:06:37.049 ************************************ 00:06:37.049 10:05:43 -- spdk/autotest.sh@209 -- # uname -s 00:06:37.049 10:05:43 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:37.049 10:05:43 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:37.049 10:05:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.049 10:05:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.049 10:05:43 -- common/autotest_common.sh@10 -- # set +x 00:06:37.049 ************************************ 00:06:37.049 START TEST blockdev_nvme_gpt 00:06:37.049 ************************************ 00:06:37.049 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:37.049 * Looking for test storage... 
00:06:37.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:37.049 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:37.049 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:37.049 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.308 10:05:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:37.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.308 --rc genhtml_branch_coverage=1 00:06:37.308 --rc genhtml_function_coverage=1 00:06:37.308 --rc genhtml_legend=1 00:06:37.308 --rc geninfo_all_blocks=1 00:06:37.308 --rc geninfo_unexecuted_blocks=1 00:06:37.308 00:06:37.308 ' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:37.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.308 --rc 
genhtml_branch_coverage=1 00:06:37.308 --rc genhtml_function_coverage=1 00:06:37.308 --rc genhtml_legend=1 00:06:37.308 --rc geninfo_all_blocks=1 00:06:37.308 --rc geninfo_unexecuted_blocks=1 00:06:37.308 00:06:37.308 ' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:37.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.308 --rc genhtml_branch_coverage=1 00:06:37.308 --rc genhtml_function_coverage=1 00:06:37.308 --rc genhtml_legend=1 00:06:37.308 --rc geninfo_all_blocks=1 00:06:37.308 --rc geninfo_unexecuted_blocks=1 00:06:37.308 00:06:37.308 ' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:37.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.308 --rc genhtml_branch_coverage=1 00:06:37.308 --rc genhtml_function_coverage=1 00:06:37.308 --rc genhtml_legend=1 00:06:37.308 --rc geninfo_all_blocks=1 00:06:37.308 --rc geninfo_unexecuted_blocks=1 00:06:37.308 00:06:37.308 ' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60925 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:37.308 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60925 
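Here waitforlisten (common/autotest_common.sh) blocks until the spdk_tgt just launched (pid 60925) is accepting RPCs on /var/tmp/spdk.sock, retrying up to the max_retries=100 seen in the trace. A rough sketch of that wait loop, assuming scripts/rpc.py from the same checkout (the real helper also rechecks that the pid is still alive):

  # poll the RPC socket until the target answers, up to 100 tries
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done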
00:06:37.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60925 ']' 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.308 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.309 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.309 10:05:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:37.309 10:05:43 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:37.309 [2024-12-06 10:05:43.367943] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:37.309 [2024-12-06 10:05:43.368064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60925 ] 00:06:37.566 [2024-12-06 10:05:43.525296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.566 [2024-12-06 10:05:43.647837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.129 10:05:44 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.129 10:05:44 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:38.129 10:05:44 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:38.129 10:05:44 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:38.129 10:05:44 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:38.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:38.644 Waiting for block devices as requested 00:06:38.644 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:38.901 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:38.901 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:38.901 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:44.157 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:44.157 10:05:50 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:44.157 BYT; 00:06:44.157 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:44.157 BYT; 00:06:44.157 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:44.157 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:44.157 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:44.157 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:44.157 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.157 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:44.157 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:44.157 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.157 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:44.158 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:44.158 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:44.158 10:05:50 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:44.158 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:44.158 10:05:50 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:45.093 The operation has completed successfully. 00:06:45.093 10:05:51 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:46.470 The operation has completed successfully. 00:06:46.470 10:05:52 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:46.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:47.296 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.296 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.296 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.296 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:47.296 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:47.296 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.296 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.296 [] 00:06:47.296 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.296 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:47.296 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:47.296 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:47.296 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:47.554 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:47.554 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.554 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:47.812 10:05:53 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:47.812 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:47.812 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:47.813 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b5db13c6-5749-4ecd-88ab-13c4aa118f13"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b5db13c6-5749-4ecd-88ab-13c4aa118f13",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "b0609983-610a-4c64-b1ef-ae0132931995"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b0609983-610a-4c64-b1ef-ae0132931995",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fe498718-d4ae-4196-9d99-2634d738cc36"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe498718-d4ae-4196-9d99-2634d738cc36",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "072c5df3-5a01-4b23-9ea5-92fa75c61bed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "072c5df3-5a01-4b23-9ea5-92fa75c61bed",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cb4d94ee-7b31-4ef6-a179-efd7c7f6261b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cb4d94ee-7b31-4ef6-a179-efd7c7f6261b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:47.813 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:47.813 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:47.813 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:47.813 10:05:53 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60925 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60925 ']' 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60925 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60925 00:06:47.813 killing process with pid 60925 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60925' 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60925 00:06:47.813 10:05:53 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60925 00:06:49.706 10:05:55 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:49.706 10:05:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:49.706 10:05:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:49.706 10:05:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.706 10:05:55 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:49.706 ************************************ 00:06:49.706 START TEST bdev_hello_world 00:06:49.706 ************************************ 00:06:49.706 10:05:55 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:49.706 [2024-12-06 10:05:55.571786] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:06:49.706 [2024-12-06 10:05:55.571933] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61550 ] 00:06:49.706 [2024-12-06 10:05:55.734001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.706 [2024-12-06 10:05:55.836844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.270 [2024-12-06 10:05:56.387638] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:50.270 [2024-12-06 10:05:56.387691] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:50.270 [2024-12-06 10:05:56.387719] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:50.270 [2024-12-06 10:05:56.390201] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:50.270 [2024-12-06 10:05:56.391345] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:50.270 [2024-12-06 10:05:56.391389] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:50.270 [2024-12-06 10:05:56.392133] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
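That completes the hello_bdev example: it opened Nvme0n1 from bdev.json, got an I/O channel, wrote a buffer containing "Hello World!", read it back, and printed it; the hello_start, hello_write, write_complete, hello_read, and read_complete NOTICE lines above trace exactly that sequence. Rerunning it against another bdev only changes the -b argument:

  # single-bdev write/read smoke test using the same JSON config
  ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \
      -b Nvme0n1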
00:06:50.270 00:06:50.270 [2024-12-06 10:05:56.392164] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:51.246 00:06:51.246 real 0m1.617s 00:06:51.246 user 0m1.324s 00:06:51.246 sys 0m0.185s 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:51.246 ************************************ 00:06:51.246 END TEST bdev_hello_world 00:06:51.246 ************************************ 00:06:51.246 10:05:57 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:51.246 10:05:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:51.246 10:05:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.246 10:05:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.246 ************************************ 00:06:51.246 START TEST bdev_bounds 00:06:51.246 ************************************ 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:51.246 Process bdevio pid: 61587 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61587 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61587' 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61587 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61587 ']' 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:51.246 10:05:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:51.247 [2024-12-06 10:05:57.258559] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
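bdev_bounds drives bdevio in a two-step pattern: the app is started with -w, which holds it idle after loading the bdevs from bdev.json until the tests are triggered over RPC, and the tests.py perform_tests call a few lines below is that trigger (waitforlisten bridges the gap, as before). A sketch of the pattern, assuming the in-tree paths used in this run:

  # step 1: start bdevio waiting for an RPC trigger (-w); -s 0 and --json as traced above
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
  # step 2: once the RPC socket is up, kick off the registered CUnit suites
  test/bdev/bdevio/tests.py perform_tests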
00:06:51.247 [2024-12-06 10:05:57.258683] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61587 ]
00:06:51.503 [2024-12-06 10:05:57.422364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:51.503 [2024-12-06 10:05:57.528872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:51.503 [2024-12-06 10:05:57.529471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.503 [2024-12-06 10:05:57.529514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:52.067 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:52.067 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:06:52.067 10:05:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:06:52.067 I/O targets:
00:06:52.067 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:06:52.067 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:06:52.067 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:06:52.067 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:52.067 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:52.067 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:06:52.067 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:06:52.067
00:06:52.067
00:06:52.067 CUnit - A unit testing framework for C - Version 2.1-3
00:06:52.067 http://cunit.sourceforge.net/
00:06:52.067
00:06:52.067
00:06:52.067 Suite: bdevio tests on: Nvme3n1
00:06:52.067 Test: blockdev write read block ...passed
00:06:52.067 Test: blockdev write zeroes read block ...passed
00:06:52.067 Test: blockdev write zeroes read no split ...passed
00:06:52.325 Test: blockdev write zeroes read split ...passed
00:06:52.325 Test: blockdev write zeroes read split partial ...passed
00:06:52.325 Test: blockdev reset ...[2024-12-06 10:05:58.249183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:06:52.325 [2024-12-06 10:05:58.253271] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:06:52.325 passed 00:06:52.325 Test: blockdev write read 8 blocks ...passed 00:06:52.325 Test: blockdev write read size > 128k ...passed 00:06:52.325 Test: blockdev write read invalid size ...passed 00:06:52.325 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.325 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.325 Test: blockdev write read max offset ...passed 00:06:52.325 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.325 Test: blockdev writev readv 8 blocks ...passed 00:06:52.325 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.325 Test: blockdev writev readv block ...passed 00:06:52.325 Test: blockdev writev readv size > 128k ...passed 00:06:52.325 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.325 Test: blockdev comparev and writev ...[2024-12-06 10:05:58.273848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8204000 len:0x1000 00:06:52.325 [2024-12-06 10:05:58.273984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:passed 00:06:52.325 Test: blockdev nvme passthru rw ...0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.325 passed 00:06:52.325 Test: blockdev nvme passthru vendor specific ...[2024-12-06 10:05:58.275846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.325 [2024-12-06 10:05:58.275929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.325 passed 00:06:52.325 Test: blockdev nvme admin passthru ...passed 00:06:52.325 Test: blockdev copy ...passed 00:06:52.325 Suite: bdevio tests on: Nvme2n3 00:06:52.325 Test: blockdev write read block ...passed 00:06:52.325 Test: blockdev write zeroes read block ...passed 00:06:52.325 Test: blockdev write zeroes read no split ...passed 00:06:52.325 Test: blockdev write zeroes read split ...passed 00:06:52.325 Test: blockdev write zeroes read split partial ...passed 00:06:52.325 Test: blockdev reset ...[2024-12-06 10:05:58.333343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:52.325 [2024-12-06 10:05:58.337402] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:52.325 passed 00:06:52.325 Test: blockdev write read 8 blocks ...passed 00:06:52.325 Test: blockdev write read size > 128k ...passed 00:06:52.325 Test: blockdev write read invalid size ...passed 00:06:52.325 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.325 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.325 Test: blockdev write read max offset ...passed 00:06:52.325 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.325 Test: blockdev writev readv 8 blocks ...passed 00:06:52.325 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.325 Test: blockdev writev readv block ...passed 00:06:52.325 Test: blockdev writev readv size > 128k ...passed 00:06:52.325 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.325 Test: blockdev comparev and writev ...[2024-12-06 10:05:58.357962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c8202000 len:0x1000 00:06:52.325 [2024-12-06 10:05:58.358087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.325 passed 00:06:52.325 Test: blockdev nvme passthru rw ...passed 00:06:52.325 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.325 Test: blockdev nvme admin passthru ...[2024-12-06 10:05:58.360809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.325 [2024-12-06 10:05:58.360873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.325 passed 00:06:52.325 Test: blockdev copy ...passed 00:06:52.325 Suite: bdevio tests on: Nvme2n2 00:06:52.325 Test: blockdev write read block ...passed 00:06:52.325 Test: blockdev write zeroes read block ...passed 00:06:52.325 Test: blockdev write zeroes read no split ...passed 00:06:52.325 Test: blockdev write zeroes read split ...passed 00:06:52.325 Test: blockdev write zeroes read split partial ...passed 00:06:52.325 Test: blockdev reset ...[2024-12-06 10:05:58.420436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:52.325 [2024-12-06 10:05:58.428235] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:52.325 passed 00:06:52.325 Test: blockdev write read 8 blocks ...passed 00:06:52.325 Test: blockdev write read size > 128k ...passed 00:06:52.325 Test: blockdev write read invalid size ...passed 00:06:52.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.326 Test: blockdev write read max offset ...passed 00:06:52.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.326 Test: blockdev writev readv 8 blocks ...passed 00:06:52.326 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.326 Test: blockdev writev readv block ...passed 00:06:52.326 Test: blockdev writev readv size > 128k ...passed 00:06:52.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.326 Test: blockdev comparev and writev ...[2024-12-06 10:05:58.449864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2e38000 len:0x1000 00:06:52.326 [2024-12-06 10:05:58.449916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.326 passed 00:06:52.326 Test: blockdev nvme passthru rw ...passed 00:06:52.326 Test: blockdev nvme passthru vendor specific ...[2024-12-06 10:05:58.452814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.326 [2024-12-06 10:05:58.452849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.326 passed 00:06:52.326 Test: blockdev nvme admin passthru ...passed 00:06:52.326 Test: blockdev copy ...passed 00:06:52.326 Suite: bdevio tests on: Nvme2n1 00:06:52.326 Test: blockdev write read block ...passed 00:06:52.326 Test: blockdev write zeroes read block ...passed 00:06:52.326 Test: blockdev write zeroes read no split ...passed 00:06:52.582 Test: blockdev write zeroes read split ...passed 00:06:52.582 Test: blockdev write zeroes read split partial ...passed 00:06:52.582 Test: blockdev reset ...[2024-12-06 10:05:58.514836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:52.582 [2024-12-06 10:05:58.518870] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:52.582 passed 00:06:52.582 Test: blockdev write read 8 blocks ...passed 00:06:52.582 Test: blockdev write read size > 128k ...passed 00:06:52.582 Test: blockdev write read invalid size ...passed 00:06:52.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.582 Test: blockdev write read max offset ...passed 00:06:52.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.582 Test: blockdev writev readv 8 blocks ...passed 00:06:52.582 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.582 Test: blockdev writev readv block ...passed 00:06:52.582 Test: blockdev writev readv size > 128k ...passed 00:06:52.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.582 Test: blockdev comparev and writev ...[2024-12-06 10:05:58.536557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2e34000 len:0x1000 00:06:52.582 [2024-12-06 10:05:58.536606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.582 passed 00:06:52.582 Test: blockdev nvme passthru rw ...passed 00:06:52.582 Test: blockdev nvme passthru vendor specific ...[2024-12-06 10:05:58.539177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:52.582 passed 00:06:52.582 Test: blockdev nvme admin passthru ...[2024-12-06 10:05:58.539209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:52.582 passed 00:06:52.582 Test: blockdev copy ...passed 00:06:52.582 Suite: bdevio tests on: Nvme1n1p2 00:06:52.582 Test: blockdev write read block ...passed 00:06:52.582 Test: blockdev write zeroes read block ...passed 00:06:52.582 Test: blockdev write zeroes read no split ...passed 00:06:52.582 Test: blockdev write zeroes read split ...passed 00:06:52.582 Test: blockdev write zeroes read split partial ...passed 00:06:52.582 Test: blockdev reset ...[2024-12-06 10:05:58.600027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:52.582 [2024-12-06 10:05:58.604848] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:52.582 passed 00:06:52.582 Test: blockdev write read 8 blocks ...passed 00:06:52.582 Test: blockdev write read size > 128k ...passed 00:06:52.582 Test: blockdev write read invalid size ...passed 00:06:52.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.582 Test: blockdev write read max offset ...passed 00:06:52.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.582 Test: blockdev writev readv 8 blocks ...passed 00:06:52.582 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.582 Test: blockdev writev readv block ...passed 00:06:52.582 Test: blockdev writev readv size > 128k ...passed 00:06:52.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.582 Test: blockdev comparev and writev ...[2024-12-06 10:05:58.623775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c2e30000 len:0x1000 00:06:52.582 [2024-12-06 10:05:58.623817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.582 passed 00:06:52.582 Test: blockdev nvme passthru rw ...passed 00:06:52.582 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.582 Test: blockdev nvme admin passthru ...passed 00:06:52.582 Test: blockdev copy ...passed 00:06:52.582 Suite: bdevio tests on: Nvme1n1p1 00:06:52.582 Test: blockdev write read block ...passed 00:06:52.582 Test: blockdev write zeroes read block ...passed 00:06:52.582 Test: blockdev write zeroes read no split ...passed 00:06:52.582 Test: blockdev write zeroes read split ...passed 00:06:52.582 Test: blockdev write zeroes read split partial ...passed 00:06:52.582 Test: blockdev reset ...[2024-12-06 10:05:58.676339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:52.582 [2024-12-06 10:05:58.681322] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:52.582 passed 00:06:52.582 Test: blockdev write read 8 blocks ...passed 00:06:52.582 Test: blockdev write read size > 128k ...passed 00:06:52.582 Test: blockdev write read invalid size ...passed 00:06:52.582 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.582 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.582 Test: blockdev write read max offset ...passed 00:06:52.582 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.582 Test: blockdev writev readv 8 blocks ...passed 00:06:52.582 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.582 Test: blockdev writev readv block ...passed 00:06:52.582 Test: blockdev writev readv size > 128k ...passed 00:06:52.582 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.582 Test: blockdev comparev and writev ...[2024-12-06 10:05:58.700289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c840e000 len:0x1000 00:06:52.582 [2024-12-06 10:05:58.700329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:52.582 passed 00:06:52.582 Test: blockdev nvme passthru rw ...passed 00:06:52.582 Test: blockdev nvme passthru vendor specific ...passed 00:06:52.582 Test: blockdev nvme admin passthru ...passed 00:06:52.582 Test: blockdev copy ...passed 00:06:52.582 Suite: bdevio tests on: Nvme0n1 00:06:52.582 Test: blockdev write read block ...passed 00:06:52.582 Test: blockdev write zeroes read block ...passed 00:06:52.582 Test: blockdev write zeroes read no split ...passed 00:06:52.582 Test: blockdev write zeroes read split ...passed 00:06:52.840 Test: blockdev write zeroes read split partial ...passed 00:06:52.840 Test: blockdev reset ...[2024-12-06 10:05:58.756226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:52.840 [2024-12-06 10:05:58.760578] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:52.840 passed 00:06:52.840 Test: blockdev write read 8 blocks ...passed 00:06:52.840 Test: blockdev write read size > 128k ...passed 00:06:52.840 Test: blockdev write read invalid size ...passed 00:06:52.840 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:52.840 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:52.840 Test: blockdev write read max offset ...passed 00:06:52.840 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:52.840 Test: blockdev writev readv 8 blocks ...passed 00:06:52.840 Test: blockdev writev readv 30 x 1block ...passed 00:06:52.840 Test: blockdev writev readv block ...passed 00:06:52.840 Test: blockdev writev readv size > 128k ...passed 00:06:52.840 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:52.840 Test: blockdev comparev and writev ...passed 00:06:52.840 Test: blockdev nvme passthru rw ...[2024-12-06 10:05:58.777644] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:52.840 separate metadata which is not supported yet. 
00:06:52.840 passed 00:06:52.840 Test: blockdev nvme passthru vendor specific ...[2024-12-06 10:05:58.779461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:52.840 [2024-12-06 10:05:58.779518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:52.840 passed 00:06:52.840 Test: blockdev nvme admin passthru ...passed 00:06:52.840 Test: blockdev copy ...passed 00:06:52.840 00:06:52.840 Run Summary: Type Total Ran Passed Failed Inactive 00:06:52.840 suites 7 7 n/a 0 0 00:06:52.840 tests 161 161 161 0 0 00:06:52.840 asserts 1025 1025 1025 0 n/a 00:06:52.840 00:06:52.840 Elapsed time = 1.476 seconds 00:06:52.840 0 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61587 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61587 ']' 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61587 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61587 00:06:52.840 killing process with pid 61587 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61587' 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61587 00:06:52.840 10:05:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61587 00:06:53.406 10:05:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:53.406 00:06:53.406 real 0m2.327s 00:06:53.406 user 0m5.858s 00:06:53.406 sys 0m0.284s 00:06:53.406 ************************************ 00:06:53.406 END TEST bdev_bounds 00:06:53.406 ************************************ 00:06:53.406 10:05:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.406 10:05:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:53.664 10:05:59 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:53.664 10:05:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:53.664 10:05:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.664 10:05:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:53.664 ************************************ 00:06:53.664 START TEST bdev_nbd 00:06:53.664 ************************************ 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61646 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61646 /var/tmp/spdk-nbd.sock 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61646 ']' 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:53.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.664 10:05:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:53.664 [2024-12-06 10:05:59.656383] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
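Note: nbd_function_test starts a bare bdev_svc app on a private RPC socket and blocks in waitforlisten until that socket answers. A condensed sketch of that startup wait — the rpc_get_methods probe matches stock SPDK's waitforlisten, but the bdev.json placeholder and the retry budget here are illustrative:

    sock=/var/tmp/spdk-nbd.sock
    ./test/app/bdev_svc/bdev_svc -r "$sock" -i 0 --json bdev.json &
    svc_pid=$!
    for ((i = 0; i < 100; i++)); do          # ~10 s budget, illustrative
        ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Only once the socket responds does the test begin issuing nbd_start_disk calls against it, which is what the traces below show.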
00:06:53.664 [2024-12-06 10:05:59.656513] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.664 [2024-12-06 10:05:59.818493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.922 [2024-12-06 10:05:59.918077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.520 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.521 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:54.778 1+0 records in 00:06:54.778 1+0 records out 00:06:54.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843622 s, 4.9 MB/s 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:54.778 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.036 1+0 records in 00:06:55.036 1+0 records out 00:06:55.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084667 s, 4.8 MB/s 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.036 10:06:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.294 1+0 records in 00:06:55.294 1+0 records out 00:06:55.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108104 s, 3.8 MB/s 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.294 10:06:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.294 1+0 records in 00:06:55.294 1+0 records out 00:06:55.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131503 s, 3.1 MB/s 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.553 1+0 records in 00:06:55.553 1+0 records out 00:06:55.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126669 s, 3.2 MB/s 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.553 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
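Note: every nbd_start_disk above is followed by the same readiness probe — poll /proc/partitions until the node appears, then read one 4 KiB block with O_DIRECT and confirm bytes actually landed. Condensed from the trace (the /tmp/nbdtest scratch path stands in for the harness's own test file; the retry interval is illustrative):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct-I/O read proves the device services requests
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]
    }

The size check is why each probe in this log prints "1+0 records in/out" with a throughput line; a dead device would stall the dd instead of returning data.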
00:06:55.811 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:55.812 1+0 records in 00:06:55.812 1+0 records out 00:06:55.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588386 s, 7.0 MB/s 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:55.812 10:06:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:56.070 1+0 records in 00:06:56.070 1+0 records out 00:06:56.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927619 s, 4.4 MB/s 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:56.070 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd0", 00:06:56.328 "bdev_name": "Nvme0n1" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd1", 00:06:56.328 "bdev_name": "Nvme1n1p1" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd2", 00:06:56.328 "bdev_name": "Nvme1n1p2" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd3", 00:06:56.328 "bdev_name": "Nvme2n1" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd4", 00:06:56.328 "bdev_name": "Nvme2n2" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd5", 00:06:56.328 "bdev_name": "Nvme2n3" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd6", 00:06:56.328 "bdev_name": "Nvme3n1" 00:06:56.328 } 00:06:56.328 ]' 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd0", 00:06:56.328 "bdev_name": "Nvme0n1" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd1", 00:06:56.328 "bdev_name": "Nvme1n1p1" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd2", 00:06:56.328 "bdev_name": "Nvme1n1p2" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd3", 00:06:56.328 "bdev_name": "Nvme2n1" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd4", 00:06:56.328 "bdev_name": "Nvme2n2" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd5", 00:06:56.328 "bdev_name": "Nvme2n3" 00:06:56.328 }, 00:06:56.328 { 00:06:56.328 "nbd_device": "/dev/nbd6", 00:06:56.328 "bdev_name": "Nvme3n1" 00:06:56.328 } 00:06:56.328 ]' 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.328 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.585 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.843 10:06:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.101 10:06:03 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.359 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.617 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
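Note: the teardown seen above mirrors the startup — nbd_stop_disk per device over the same socket, then wait for the name to vanish from /proc/partitions before moving on. A condensed sketch of that loop (sleep interval illustrative):

    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break   # gone: safe to continue
            sleep 0.1
        done
    done

After the last device is gone, nbd_get_disks returns an empty array, which is the count=0 check visible just below.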
00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.875 10:06:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.132 10:06:04 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.132 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:58.387 /dev/nbd0 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.387 1+0 records in 00:06:58.387 1+0 records out 00:06:58.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657156 s, 6.2 MB/s 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.387 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:58.644 /dev/nbd1 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.644 10:06:04 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.644 1+0 records in 00:06:58.644 1+0 records out 00:06:58.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134928 s, 3.0 MB/s 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.644 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:58.902 /dev/nbd10 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.902 1+0 records in 00:06:58.902 1+0 records out 00:06:58.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118776 s, 3.4 MB/s 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:58.902 10:06:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:59.157 /dev/nbd11 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.157 1+0 records in 00:06:59.157 1+0 records out 00:06:59.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115777 s, 3.5 MB/s 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.157 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:59.412 /dev/nbd12 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
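Note: this second phase (nbd_rpc_data_verify) re-exports the same seven bdevs, this time onto the fixed set /dev/nbd0, /dev/nbd1, /dev/nbd10 through /dev/nbd14, then asks the target which mappings exist. The pairing and the jq extraction, condensed from the traces around this point (socket path as above):

    bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    for i in "${!bdevs[@]}"; do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdevs[i]}" "${nbds[i]}"
    done
    # nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'

The JSON listing that closes this section is exactly that nbd_get_disks output, one object per active mapping.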
00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.412 1+0 records in 00:06:59.412 1+0 records out 00:06:59.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120922 s, 3.4 MB/s 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.412 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.413 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.413 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.413 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.413 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.413 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.413 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:59.670 /dev/nbd13 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.670 1+0 records in 00:06:59.670 1+0 records out 00:06:59.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121974 s, 3.4 MB/s 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.670 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:59.927 /dev/nbd14 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:59.927 1+0 records in 00:06:59.927 1+0 records out 00:06:59.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137105 s, 3.0 MB/s 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.927 10:06:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd0", 00:07:00.184 "bdev_name": "Nvme0n1" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd1", 00:07:00.184 "bdev_name": "Nvme1n1p1" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd10", 00:07:00.184 "bdev_name": "Nvme1n1p2" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd11", 00:07:00.184 "bdev_name": "Nvme2n1" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd12", 00:07:00.184 "bdev_name": "Nvme2n2" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd13", 00:07:00.184 "bdev_name": "Nvme2n3" 
00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd14", 00:07:00.184 "bdev_name": "Nvme3n1" 00:07:00.184 } 00:07:00.184 ]' 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd0", 00:07:00.184 "bdev_name": "Nvme0n1" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd1", 00:07:00.184 "bdev_name": "Nvme1n1p1" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd10", 00:07:00.184 "bdev_name": "Nvme1n1p2" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd11", 00:07:00.184 "bdev_name": "Nvme2n1" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd12", 00:07:00.184 "bdev_name": "Nvme2n2" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd13", 00:07:00.184 "bdev_name": "Nvme2n3" 00:07:00.184 }, 00:07:00.184 { 00:07:00.184 "nbd_device": "/dev/nbd14", 00:07:00.184 "bdev_name": "Nvme3n1" 00:07:00.184 } 00:07:00.184 ]' 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:00.184 /dev/nbd1 00:07:00.184 /dev/nbd10 00:07:00.184 /dev/nbd11 00:07:00.184 /dev/nbd12 00:07:00.184 /dev/nbd13 00:07:00.184 /dev/nbd14' 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:00.184 /dev/nbd1 00:07:00.184 /dev/nbd10 00:07:00.184 /dev/nbd11 00:07:00.184 /dev/nbd12 00:07:00.184 /dev/nbd13 00:07:00.184 /dev/nbd14' 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:00.184 256+0 records in 00:07:00.184 256+0 records out 00:07:00.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00966708 s, 108 MB/s 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.184 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:00.441 256+0 records in 00:07:00.441 256+0 records out 00:07:00.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.23502 s, 4.5 MB/s 00:07:00.441 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.441 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:00.441 256+0 records in 00:07:00.441 256+0 records out 00:07:00.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.208508 s, 5.0 MB/s 00:07:00.441 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.441 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:00.698 256+0 records in 00:07:00.698 256+0 records out 00:07:00.698 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.253806 s, 4.1 MB/s 00:07:00.698 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:00.698 10:06:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:00.955 256+0 records in 00:07:00.955 256+0 records out 00:07:00.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.256913 s, 4.1 MB/s 00:07:01.212 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.212 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:01.212 256+0 records in 00:07:01.212 256+0 records out 00:07:01.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.195817 s, 5.4 MB/s 00:07:01.212 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.212 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:01.547 256+0 records in 00:07:01.547 256+0 records out 00:07:01.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.234022 s, 4.5 MB/s 00:07:01.547 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.547 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:01.848 256+0 records in 00:07:01.848 256+0 records out 00:07:01.848 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245148 s, 4.3 MB/s 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.848 10:06:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.106 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.364 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.621 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.878 10:06:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.135 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.392 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:03.649 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:03.906 malloc_lvol_verify 00:07:03.906 10:06:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:03.906 a24746f1-0af1-4cb3-95b0-ffb4ed7b9a10 00:07:04.165 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:04.165 295cbfbc-9b2c-4cf4-bc9a-b49b4080100f 00:07:04.165 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:04.423 /dev/nbd0 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:04.423 mke2fs 1.47.0 (5-Feb-2023) 00:07:04.423 Discarding device blocks: 0/4096 done 00:07:04.423 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:04.423 00:07:04.423 Allocating group tables: 0/1 done 00:07:04.423 Writing inode tables: 0/1 done 00:07:04.423 Creating journal (1024 blocks): done 00:07:04.423 Writing superblocks and filesystem accounting information: 0/1 done 00:07:04.423 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:04.423 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61646 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61646 ']' 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61646 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61646 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.681 killing process with pid 61646 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61646' 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61646 00:07:04.681 10:06:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61646 00:07:05.616 10:06:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:05.616 00:07:05.616 real 0m11.980s 00:07:05.616 user 0m16.299s 00:07:05.616 sys 0m3.957s 00:07:05.616 10:06:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.616 ************************************ 00:07:05.616 END TEST bdev_nbd 00:07:05.616 ************************************ 00:07:05.616 10:06:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:05.616 10:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:05.616 10:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:05.616 skipping fio tests on NVMe due to multi-ns failures. 00:07:05.616 10:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:05.616 10:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:05.616 10:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.616 10:06:11 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.616 10:06:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:05.616 10:06:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.616 10:06:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:05.616 ************************************ 00:07:05.616 START TEST bdev_verify 00:07:05.617 ************************************ 00:07:05.617 10:06:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:05.617 [2024-12-06 10:06:11.719991] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:05.617 [2024-12-06 10:06:11.720168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62071 ] 00:07:05.874 [2024-12-06 10:06:11.895674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.874 [2024-12-06 10:06:11.998862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.874 [2024-12-06 10:06:11.998983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.439 Running I/O for 5 seconds... 
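Editor's note: bdev_verify drives all seven bdevs from bdev.json through the bdevperf example app; the full command is recorded in the run_test trace just above. Reformatted as a standalone invocation with the option meanings spelled out; the reading of -C is inferred from the paired Core Mask 0x1/0x2 jobs in the results that follow, not stated in the trace itself.

    SPDK=/home/vagrant/spdk_repo/spdk
    # -q 128    : keep 128 I/Os outstanding per job
    # -o 4096   : 4 KiB I/O size
    # -w verify : write a pattern, read it back, and compare
    # -t 5      : run for 5 seconds
    # -m 0x3    : run reactors on cores 0 and 1
    # -C        : let each selected core drive each bdev (inferred from the output)
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3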
00:07:08.740 17344.00 IOPS, 67.75 MiB/s [2024-12-06T10:06:16.306Z] 18240.00 IOPS, 71.25 MiB/s [2024-12-06T10:06:16.870Z] 18730.67 IOPS, 73.17 MiB/s [2024-12-06T10:06:17.801Z] 19040.00 IOPS, 74.38 MiB/s [2024-12-06T10:06:17.801Z] 19276.80 IOPS, 75.30 MiB/s 00:07:11.634 Latency(us) 00:07:11.634 [2024-12-06T10:06:17.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.634 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x0 length 0xbd0bd 00:07:11.634 Nvme0n1 : 5.09 1332.99 5.21 0.00 0.00 95631.70 22080.59 96791.63 00:07:11.634 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:11.634 Nvme0n1 : 5.05 1368.55 5.35 0.00 0.00 93174.18 20669.05 95581.74 00:07:11.634 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x0 length 0x4ff80 00:07:11.634 Nvme1n1p1 : 5.09 1332.26 5.20 0.00 0.00 95484.75 23592.96 95581.74 00:07:11.634 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:11.634 Nvme1n1p1 : 5.05 1368.15 5.34 0.00 0.00 93000.99 23088.84 86709.17 00:07:11.634 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x0 length 0x4ff7f 00:07:11.634 Nvme1n1p2 : 5.09 1331.88 5.20 0.00 0.00 95326.16 25407.80 94371.84 00:07:11.634 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:11.634 Nvme1n1p2 : 5.07 1374.89 5.37 0.00 0.00 92300.71 7410.61 77433.30 00:07:11.634 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x0 length 0x80000 00:07:11.634 Nvme2n1 : 5.09 1331.52 5.20 0.00 0.00 95180.31 26214.40 89532.26 00:07:11.634 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x80000 length 0x80000 00:07:11.634 Nvme2n1 : 5.08 1374.51 5.37 0.00 0.00 92156.30 6301.54 79046.50 00:07:11.634 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x0 length 0x80000 00:07:11.634 Nvme2n2 : 5.10 1331.15 5.20 0.00 0.00 94963.77 26012.75 91548.75 00:07:11.634 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x80000 length 0x80000 00:07:11.634 Nvme2n2 : 5.09 1384.26 5.41 0.00 0.00 91505.55 8015.56 79449.80 00:07:11.634 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x0 length 0x80000 00:07:11.634 Nvme2n3 : 5.10 1330.80 5.20 0.00 0.00 94706.78 19257.50 94775.14 00:07:11.634 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x80000 length 0x80000 00:07:11.634 Nvme2n3 : 5.09 1383.89 5.41 0.00 0.00 91371.90 8469.27 79449.80 00:07:11.634 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x0 length 0x20000 00:07:11.634 Nvme3n1 : 5.11 1340.69 5.24 0.00 0.00 93957.94 1903.06 97598.23 00:07:11.634 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:11.634 Verification LBA range: start 0x20000 length 0x20000 00:07:11.634 Nvme3n1 : 
5.09 1383.48 5.40 0.00 0.00 91225.05 8721.33 79449.80 00:07:11.634 [2024-12-06T10:06:17.801Z] =================================================================================================================== 00:07:11.634 [2024-12-06T10:06:17.801Z] Total : 18969.05 74.10 0.00 0.00 93546.53 1903.06 97598.23 00:07:13.002 00:07:13.002 real 0m7.333s 00:07:13.002 user 0m13.667s 00:07:13.002 sys 0m0.234s 00:07:13.002 10:06:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.002 ************************************ 00:07:13.002 END TEST bdev_verify 00:07:13.002 ************************************ 00:07:13.002 10:06:18 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.002 10:06:19 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.002 10:06:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:13.002 10:06:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.002 10:06:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.002 ************************************ 00:07:13.002 START TEST bdev_verify_big_io 00:07:13.002 ************************************ 00:07:13.002 10:06:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.002 [2024-12-06 10:06:19.097659] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:13.002 [2024-12-06 10:06:19.097784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:07:13.259 [2024-12-06 10:06:19.258944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.259 [2024-12-06 10:06:19.363984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.259 [2024-12-06 10:06:19.364166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.191 Running I/O for 5 seconds... 
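Editor's note: the START TEST / END TEST banners and the real/user/sys triplets throughout this log come from the run_test wrapper in common/autotest_common.sh, which also produced the '[' 16 -le 1 ']' argument-count guard visible in the trace (16 = the test name plus the 15 words of the bdevperf command). A minimal sketch of its visible behavior; the real wrapper additionally manages xtrace state and propagates the wrapped command's exit code.

    run_test() {
        if [ "$#" -le 1 ]; then                  # traced as: '[' 16 -le 1 ']'
            echo "usage: run_test <name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                                # emits the real/user/sys lines
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }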
00:07:18.370 1094.00 IOPS, 68.38 MiB/s [2024-12-06T10:06:26.433Z] 1548.00 IOPS, 96.75 MiB/s [2024-12-06T10:06:26.433Z] 2122.67 IOPS, 132.67 MiB/s 00:07:20.266 Latency(us) 00:07:20.266 [2024-12-06T10:06:26.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.266 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.266 Verification LBA range: start 0x0 length 0xbd0b 00:07:20.267 Nvme0n1 : 5.94 87.56 5.47 0.00 0.00 1379941.33 50210.66 1406705.03 00:07:20.267 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:20.267 Nvme0n1 : 5.92 91.53 5.72 0.00 0.00 1332479.14 27021.00 1406705.03 00:07:20.267 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x0 length 0x4ff8 00:07:20.267 Nvme1n1p1 : 5.94 90.55 5.66 0.00 0.00 1303328.69 129055.51 1213121.77 00:07:20.267 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:20.267 Nvme1n1p1 : 5.93 90.28 5.64 0.00 0.00 1294288.04 92355.35 1187310.67 00:07:20.267 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x0 length 0x4ff7 00:07:20.267 Nvme1n1p2 : 6.07 94.92 5.93 0.00 0.00 1216367.76 123409.33 1090519.04 00:07:20.267 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:20.267 Nvme1n1p2 : 6.02 95.70 5.98 0.00 0.00 1200916.39 89532.26 1135688.47 00:07:20.267 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x0 length 0x8000 00:07:20.267 Nvme2n1 : 6.07 94.88 5.93 0.00 0.00 1172571.37 124215.93 1090519.04 00:07:20.267 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x8000 length 0x8000 00:07:20.267 Nvme2n1 : 6.16 99.18 6.20 0.00 0.00 1116916.61 65334.35 1167952.34 00:07:20.267 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x0 length 0x8000 00:07:20.267 Nvme2n2 : 6.20 99.35 6.21 0.00 0.00 1084829.65 89128.96 1122782.92 00:07:20.267 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x8000 length 0x8000 00:07:20.267 Nvme2n2 : 6.16 103.84 6.49 0.00 0.00 1040549.42 70980.53 1193763.45 00:07:20.267 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x0 length 0x8000 00:07:20.267 Nvme2n3 : 6.25 100.73 6.30 0.00 0.00 1043868.13 28230.89 2206849.18 00:07:20.267 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x8000 length 0x8000 00:07:20.267 Nvme2n3 : 6.20 107.84 6.74 0.00 0.00 966568.81 37506.76 1284102.30 00:07:20.267 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x0 length 0x2000 00:07:20.267 Nvme3n1 : 6.27 109.02 6.81 0.00 0.00 932559.23 8116.38 2245565.83 00:07:20.267 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:20.267 Verification LBA range: start 0x2000 length 0x2000 00:07:20.267 Nvme3n1 : 6.27 122.55 7.66 0.00 0.00 825726.46 6604.01 1251838.42 00:07:20.267 
[2024-12-06T10:06:26.434Z] =================================================================================================================== 00:07:20.267 [2024-12-06T10:06:26.434Z] Total : 1387.92 86.75 0.00 0.00 1119478.18 6604.01 2245565.83 00:07:22.198 00:07:22.198 real 0m9.060s 00:07:22.198 user 0m17.154s 00:07:22.198 sys 0m0.241s 00:07:22.198 10:06:28 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.198 10:06:28 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:22.198 ************************************ 00:07:22.198 END TEST bdev_verify_big_io 00:07:22.198 ************************************ 00:07:22.198 10:06:28 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.198 10:06:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:22.198 10:06:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.199 10:06:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.199 ************************************ 00:07:22.199 START TEST bdev_write_zeroes 00:07:22.199 ************************************ 00:07:22.199 10:06:28 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.199 [2024-12-06 10:06:28.228461] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:22.199 [2024-12-06 10:06:28.228582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62285 ] 00:07:22.455 [2024-12-06 10:06:28.389961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.455 [2024-12-06 10:06:28.491190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.019 Running I/O for 1 seconds... 
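Editor's note: the Total rows of the two bdevperf runs above are internally consistent with their configured I/O sizes, since MiB/s = IOPS x I/O size / 2^20. The 4 KiB verify run reported 18969.05 IOPS and 74.10 MiB/s; the 64 KiB big-I/O run reported 1387.92 IOPS and 86.75 MiB/s. The arithmetic as a quick check:

    # 18969.05 * 4096  / 1048576  = 74.10 MiB/s  (verify run, -o 4096)
    #  1387.92 * 65536 / 1048576 ~= 86.75 MiB/s  (big_io run, -o 65536)
    awk 'BEGIN {
        printf "%.2f MiB/s\n", 18969.05 * 4096  / 1048576
        printf "%.2f MiB/s\n",  1387.92 * 65536 / 1048576
    }'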
00:07:24.896 45.00 IOPS, 0.18 MiB/s 00:07:24.896 Latency(us) 00:07:24.896 [2024-12-06T10:06:31.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.896 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.896 Nvme0n1 : 1.85 93.28 0.36 0.00 0.00 1326119.21 10737.82 1858399.31 00:07:24.896 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.896 Nvme1n1p1 : 1.19 107.45 0.42 0.00 0.00 1190537.06 1187310.67 1193763.45 00:07:24.896 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.896 Nvme1n1p2 : 1.82 70.29 0.27 0.00 0.00 1813229.88 1806777.11 1819682.66 00:07:24.896 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.896 Nvme2n1 : 1.82 70.25 0.27 0.00 0.00 1813229.88 1806777.11 1819682.66 00:07:24.896 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.896 Nvme2n2 : 1.82 70.20 0.27 0.00 0.00 1813229.88 1806777.11 1819682.66 00:07:24.896 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.896 Nvme2n3 : 1.82 70.16 0.27 0.00 0.00 1813229.88 1806777.11 1819682.66 00:07:24.896 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.896 Nvme3n1 : 1.85 69.32 0.27 0.00 0.00 1839040.98 1832588.21 1845493.76 00:07:24.896 [2024-12-06T10:06:31.063Z] =================================================================================================================== 00:07:24.896 [2024-12-06T10:06:31.063Z] Total : 550.95 2.15 0.00 0.00 1642484.92 10737.82 1858399.31 00:07:25.830 00:07:25.830 real 0m3.513s 00:07:25.830 user 0m3.208s 00:07:25.830 sys 0m0.188s 00:07:25.830 10:06:31 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.830 ************************************ 00:07:25.830 END TEST bdev_write_zeroes 00:07:25.830 ************************************ 00:07:25.830 10:06:31 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:25.830 10:06:31 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.830 10:06:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:25.830 10:06:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.830 10:06:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.830 ************************************ 00:07:25.830 START TEST bdev_json_nonenclosed 00:07:25.830 ************************************ 00:07:25.830 10:06:31 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.830 [2024-12-06 10:06:31.792808] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:07:25.830 [2024-12-06 10:06:31.792929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62349 ] 00:07:25.830 [2024-12-06 10:06:31.951241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.088 [2024-12-06 10:06:32.052153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.088 [2024-12-06 10:06:32.052234] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:26.088 [2024-12-06 10:06:32.052251] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.088 [2024-12-06 10:06:32.052261] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.088 00:07:26.088 real 0m0.505s 00:07:26.088 user 0m0.306s 00:07:26.088 sys 0m0.094s 00:07:26.088 10:06:32 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.088 ************************************ 00:07:26.088 END TEST bdev_json_nonenclosed 00:07:26.088 ************************************ 00:07:26.088 10:06:32 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:26.345 10:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.345 10:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.345 10:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.345 10:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.345 ************************************ 00:07:26.345 START TEST bdev_json_nonarray 00:07:26.345 ************************************ 00:07:26.345 10:06:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.345 [2024-12-06 10:06:32.363527] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:26.345 [2024-12-06 10:06:32.363651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62369 ] 00:07:26.660 [2024-12-06 10:06:32.523164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.660 [2024-12-06 10:06:32.625222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.660 [2024-12-06 10:06:32.625299] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
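Editor's note: bdev_json_nonenclosed and, immediately after it, bdev_json_nonarray are negative tests. Each points bdevperf at a deliberately malformed configuration and passes only if json_config rejects it with the errors shown above ("not enclosed in {}" and "'subsystems' should be an array"). The shipped fixtures are test/bdev/nonenclosed.json and test/bdev/nonarray.json; the file contents below are illustrative reconstructions from the error text, not the real fixtures.

    # hypothetical stand-ins for the two malformed configs
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json
    printf '%s\n' '{ "subsystems": "not-an-array" }' > /tmp/nonarray.json
    SPDK=/home/vagrant/spdk_repo/spdk
    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        "$SPDK/build/examples/bdevperf" --json "$cfg" \
            -q 128 -o 4096 -w write_zeroes -t 1 || echo "$cfg rejected, as expected"
    done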
00:07:26.660 [2024-12-06 10:06:32.625316] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.660 [2024-12-06 10:06:32.625325] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.660 00:07:26.660 real 0m0.510s 00:07:26.660 user 0m0.317s 00:07:26.660 sys 0m0.088s 00:07:26.660 10:06:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.660 10:06:32 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:26.660 ************************************ 00:07:26.660 END TEST bdev_json_nonarray 00:07:26.660 ************************************ 00:07:26.917 10:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:26.917 10:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:26.917 10:06:32 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:26.917 10:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.917 10:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.918 10:06:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.918 ************************************ 00:07:26.918 START TEST bdev_gpt_uuid 00:07:26.918 ************************************ 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62400 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62400 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62400 ']' 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.918 10:06:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.918 [2024-12-06 10:06:32.960262] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:07:26.918 [2024-12-06 10:06:32.960390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62400 ] 00:07:27.175 [2024-12-06 10:06:33.117818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.175 [2024-12-06 10:06:33.221085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.739 10:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.739 10:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:27.739 10:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.739 10:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.739 10:06:33 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 Some configs were skipped because the RPC state that can call them passed over. 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.996 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.253 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.253 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:28.253 { 00:07:28.253 "name": "Nvme1n1p1", 00:07:28.253 "aliases": [ 00:07:28.253 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:28.253 ], 00:07:28.253 "product_name": "GPT Disk", 00:07:28.253 "block_size": 4096, 00:07:28.253 "num_blocks": 655104, 00:07:28.253 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.254 "assigned_rate_limits": { 00:07:28.254 "rw_ios_per_sec": 0, 00:07:28.254 "rw_mbytes_per_sec": 0, 00:07:28.254 "r_mbytes_per_sec": 0, 00:07:28.254 "w_mbytes_per_sec": 0 00:07:28.254 }, 00:07:28.254 "claimed": false, 00:07:28.254 "zoned": false, 00:07:28.254 "supported_io_types": { 00:07:28.254 "read": true, 00:07:28.254 "write": true, 00:07:28.254 "unmap": true, 00:07:28.254 "flush": true, 00:07:28.254 "reset": true, 00:07:28.254 "nvme_admin": false, 00:07:28.254 "nvme_io": false, 00:07:28.254 "nvme_io_md": false, 00:07:28.254 "write_zeroes": true, 00:07:28.254 "zcopy": false, 00:07:28.254 "get_zone_info": false, 00:07:28.254 "zone_management": false, 00:07:28.254 "zone_append": false, 00:07:28.254 "compare": true, 00:07:28.254 "compare_and_write": false, 00:07:28.254 "abort": true, 00:07:28.254 "seek_hole": false, 00:07:28.254 "seek_data": false, 00:07:28.254 "copy": true, 00:07:28.254 "nvme_iov_md": false 00:07:28.254 }, 00:07:28.254 "driver_specific": { 
00:07:28.254 "gpt": { 00:07:28.254 "base_bdev": "Nvme1n1", 00:07:28.254 "offset_blocks": 256, 00:07:28.254 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:28.254 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.254 "partition_name": "SPDK_TEST_first" 00:07:28.254 } 00:07:28.254 } 00:07:28.254 } 00:07:28.254 ]' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:28.254 { 00:07:28.254 "name": "Nvme1n1p2", 00:07:28.254 "aliases": [ 00:07:28.254 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:28.254 ], 00:07:28.254 "product_name": "GPT Disk", 00:07:28.254 "block_size": 4096, 00:07:28.254 "num_blocks": 655103, 00:07:28.254 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:28.254 "assigned_rate_limits": { 00:07:28.254 "rw_ios_per_sec": 0, 00:07:28.254 "rw_mbytes_per_sec": 0, 00:07:28.254 "r_mbytes_per_sec": 0, 00:07:28.254 "w_mbytes_per_sec": 0 00:07:28.254 }, 00:07:28.254 "claimed": false, 00:07:28.254 "zoned": false, 00:07:28.254 "supported_io_types": { 00:07:28.254 "read": true, 00:07:28.254 "write": true, 00:07:28.254 "unmap": true, 00:07:28.254 "flush": true, 00:07:28.254 "reset": true, 00:07:28.254 "nvme_admin": false, 00:07:28.254 "nvme_io": false, 00:07:28.254 "nvme_io_md": false, 00:07:28.254 "write_zeroes": true, 00:07:28.254 "zcopy": false, 00:07:28.254 "get_zone_info": false, 00:07:28.254 "zone_management": false, 00:07:28.254 "zone_append": false, 00:07:28.254 "compare": true, 00:07:28.254 "compare_and_write": false, 00:07:28.254 "abort": true, 00:07:28.254 "seek_hole": false, 00:07:28.254 "seek_data": false, 00:07:28.254 "copy": true, 00:07:28.254 "nvme_iov_md": false 00:07:28.254 }, 00:07:28.254 "driver_specific": { 00:07:28.254 "gpt": { 00:07:28.254 "base_bdev": "Nvme1n1", 00:07:28.254 "offset_blocks": 655360, 00:07:28.254 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:28.254 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:28.254 "partition_name": "SPDK_TEST_second" 00:07:28.254 } 00:07:28.254 } 00:07:28.254 } 00:07:28.254 ]' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62400 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62400 ']' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62400 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.254 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62400 00:07:28.512 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.512 killing process with pid 62400 00:07:28.512 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.512 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62400' 00:07:28.512 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62400 00:07:28.512 10:06:34 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62400 00:07:29.900 00:07:29.900 real 0m3.078s 00:07:29.900 user 0m3.247s 00:07:29.900 sys 0m0.367s 00:07:29.900 10:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.900 ************************************ 00:07:29.900 END TEST bdev_gpt_uuid 00:07:29.900 ************************************ 00:07:29.900 10:06:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:29.900 10:06:36 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:30.158 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.417 Waiting for block devices as requested 00:07:30.417 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.417 0000:00:10.0 (1b36 0010): 
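# A minimal standalone sketch of the alias/GUID consistency check the
# bdev_gpt_uuid test performed above, assuming a running SPDK app with the
# GPT bdevs already exposed; the bdev name is illustrative, the jq paths are
# the ones the test used:
bdev=Nvme1n1p2
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
alias_uuid=$("$rpc" bdev_get_bdevs -b "$bdev" | jq -r '.[0].aliases[0]')
part_guid=$("$rpc" bdev_get_bdevs -b "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid')
[[ "$alias_uuid" == "$part_guid" ]] && echo "alias matches unique_partition_guid for $bdev"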
uio_pci_generic -> nvme 00:07:30.674 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.674 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.991 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:35.991 10:06:41 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:35.991 10:06:41 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:36.248 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:36.248 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:36.248 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:36.248 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:36.248 10:06:42 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:36.248 00:07:36.248 real 0m59.152s 00:07:36.248 user 1m14.678s 00:07:36.248 sys 0m8.246s 00:07:36.248 10:06:42 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.248 10:06:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.248 ************************************ 00:07:36.248 END TEST blockdev_nvme_gpt 00:07:36.248 ************************************ 00:07:36.248 10:06:42 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:36.248 10:06:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.248 10:06:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.248 10:06:42 -- common/autotest_common.sh@10 -- # set +x 00:07:36.248 ************************************ 00:07:36.248 START TEST nvme 00:07:36.248 ************************************ 00:07:36.248 10:06:42 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:36.507 * Looking for test storage... 00:07:36.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.507 10:06:42 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.507 10:06:42 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.507 10:06:42 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.507 10:06:42 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.507 10:06:42 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.507 10:06:42 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.507 10:06:42 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.507 10:06:42 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:36.507 10:06:42 nvme -- scripts/common.sh@345 -- # : 1 00:07:36.507 10:06:42 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.507 10:06:42 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.507 10:06:42 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:36.507 10:06:42 nvme -- scripts/common.sh@353 -- # local d=1 00:07:36.507 10:06:42 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.507 10:06:42 nvme -- scripts/common.sh@355 -- # echo 1 00:07:36.507 10:06:42 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.507 10:06:42 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@353 -- # local d=2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.507 10:06:42 nvme -- scripts/common.sh@355 -- # echo 2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.507 10:06:42 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.507 10:06:42 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.507 10:06:42 nvme -- scripts/common.sh@368 -- # return 0 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:36.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.507 --rc genhtml_branch_coverage=1 00:07:36.507 --rc genhtml_function_coverage=1 00:07:36.507 --rc genhtml_legend=1 00:07:36.507 --rc geninfo_all_blocks=1 00:07:36.507 --rc geninfo_unexecuted_blocks=1 00:07:36.507 00:07:36.507 ' 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:36.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.507 --rc genhtml_branch_coverage=1 00:07:36.507 --rc genhtml_function_coverage=1 00:07:36.507 --rc genhtml_legend=1 00:07:36.507 --rc geninfo_all_blocks=1 00:07:36.507 --rc geninfo_unexecuted_blocks=1 00:07:36.507 00:07:36.507 ' 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:36.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.507 --rc genhtml_branch_coverage=1 00:07:36.507 --rc genhtml_function_coverage=1 00:07:36.507 --rc genhtml_legend=1 00:07:36.507 --rc geninfo_all_blocks=1 00:07:36.507 --rc geninfo_unexecuted_blocks=1 00:07:36.507 00:07:36.507 ' 00:07:36.507 10:06:42 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:36.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.507 --rc genhtml_branch_coverage=1 00:07:36.507 --rc genhtml_function_coverage=1 00:07:36.507 --rc genhtml_legend=1 00:07:36.507 --rc geninfo_all_blocks=1 00:07:36.507 --rc geninfo_unexecuted_blocks=1 00:07:36.507 00:07:36.507 ' 00:07:36.507 10:06:42 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:37.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.639 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.639 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.639 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.639 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.639 10:06:43 nvme -- nvme/nvme.sh@79 -- # uname 00:07:37.639 10:06:43 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:37.639 10:06:43 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:37.639 10:06:43 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:37.639 10:06:43 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:37.639 Waiting for stub to ready for secondary processes... 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1075 -- # stubpid=63034 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63034 ]] 00:07:37.639 10:06:43 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:37.639 [2024-12-06 10:06:43.694842] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:07:37.639 [2024-12-06 10:06:43.694970] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:38.573 [2024-12-06 10:06:44.452743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.573 [2024-12-06 10:06:44.550972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.573 [2024-12-06 10:06:44.551359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:38.573 [2024-12-06 10:06:44.551482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.573 [2024-12-06 10:06:44.565168] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:38.573 [2024-12-06 10:06:44.565206] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.573 [2024-12-06 10:06:44.580347] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:38.573 [2024-12-06 10:06:44.580589] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:38.573 [2024-12-06 10:06:44.585278] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.573 [2024-12-06 10:06:44.585964] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:38.573 [2024-12-06 10:06:44.586078] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:38.573 [2024-12-06 10:06:44.590416] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.573 [2024-12-06 10:06:44.590862] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:38.573 [2024-12-06 10:06:44.590909] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:38.573 [2024-12-06 10:06:44.593200] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:38.573 [2024-12-06 10:06:44.593352] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:38.573 [2024-12-06 10:06:44.593396] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:38.573 [2024-12-06 10:06:44.593428] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:38.573 [2024-12-06 10:06:44.593476] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:38.573 done. 00:07:38.573 10:06:44 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:38.573 10:06:44 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:38.573 10:06:44 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:38.573 10:06:44 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:38.573 10:06:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.573 10:06:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.573 ************************************ 00:07:38.573 START TEST nvme_reset 00:07:38.573 ************************************ 00:07:38.573 10:06:44 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:38.831 Initializing NVMe Controllers 00:07:38.831 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:38.831 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:38.831 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:38.831 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:38.831 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:38.831 00:07:38.831 real 0m0.219s 00:07:38.831 user 0m0.074s 00:07:38.831 sys 0m0.098s 00:07:38.831 ************************************ 00:07:38.831 10:06:44 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.831 10:06:44 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:38.831 END TEST nvme_reset 00:07:38.831 ************************************ 00:07:38.831 10:06:44 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:38.831 10:06:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.831 10:06:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.831 10:06:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.831 ************************************ 00:07:38.831 START TEST nvme_identify 00:07:38.831 ************************************ 00:07:38.831 10:06:44 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:38.831 10:06:44 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:38.831 10:06:44 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:38.831 10:06:44 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:38.831 10:06:44 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:38.831 10:06:44 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:38.831 10:06:44 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:38.831 10:06:44 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:38.831 10:06:44 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.831 10:06:44 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:39.102 10:06:45 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:39.103 10:06:45 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:39.103 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:39.103 [2024-12-06 
10:06:45.207538] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63055 terminated unexpected 00:07:39.103 ===================================================== 00:07:39.103 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:39.103 ===================================================== 00:07:39.103 Controller Capabilities/Features 00:07:39.103 ================================ 00:07:39.103 Vendor ID: 1b36 00:07:39.103 Subsystem Vendor ID: 1af4 00:07:39.103 Serial Number: 12340 00:07:39.103 Model Number: QEMU NVMe Ctrl 00:07:39.103 Firmware Version: 8.0.0 00:07:39.103 Recommended Arb Burst: 6 00:07:39.103 IEEE OUI Identifier: 00 54 52 00:07:39.103 Multi-path I/O 00:07:39.103 May have multiple subsystem ports: No 00:07:39.103 May have multiple controllers: No 00:07:39.103 Associated with SR-IOV VF: No 00:07:39.103 Max Data Transfer Size: 524288 00:07:39.103 Max Number of Namespaces: 256 00:07:39.103 Max Number of I/O Queues: 64 00:07:39.103 NVMe Specification Version (VS): 1.4 00:07:39.103 NVMe Specification Version (Identify): 1.4 00:07:39.103 Maximum Queue Entries: 2048 00:07:39.103 Contiguous Queues Required: Yes 00:07:39.103 Arbitration Mechanisms Supported 00:07:39.103 Weighted Round Robin: Not Supported 00:07:39.103 Vendor Specific: Not Supported 00:07:39.103 Reset Timeout: 7500 ms 00:07:39.103 Doorbell Stride: 4 bytes 00:07:39.104 NVM Subsystem Reset: Not Supported 00:07:39.104 Command Sets Supported 00:07:39.104 NVM Command Set: Supported 00:07:39.104 Boot Partition: Not Supported 00:07:39.104 Memory Page Size Minimum: 4096 bytes 00:07:39.104 Memory Page Size Maximum: 65536 bytes 00:07:39.104 Persistent Memory Region: Not Supported 00:07:39.104 Optional Asynchronous Events Supported 00:07:39.104 Namespace Attribute Notices: Supported 00:07:39.104 Firmware Activation Notices: Not Supported 00:07:39.104 ANA Change Notices: Not Supported 00:07:39.104 PLE Aggregate Log Change Notices: Not Supported 00:07:39.104 LBA Status Info Alert Notices: Not Supported 00:07:39.104 EGE Aggregate Log Change Notices: Not Supported 00:07:39.104 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.104 Zone Descriptor Change Notices: Not Supported 00:07:39.104 Discovery Log Change Notices: Not Supported 00:07:39.104 Controller Attributes 00:07:39.104 128-bit Host Identifier: Not Supported 00:07:39.104 Non-Operational Permissive Mode: Not Supported 00:07:39.104 NVM Sets: Not Supported 00:07:39.104 Read Recovery Levels: Not Supported 00:07:39.104 Endurance Groups: Not Supported 00:07:39.104 Predictable Latency Mode: Not Supported 00:07:39.104 Traffic Based Keep ALive: Not Supported 00:07:39.104 Namespace Granularity: Not Supported 00:07:39.104 SQ Associations: Not Supported 00:07:39.104 UUID List: Not Supported 00:07:39.104 Multi-Domain Subsystem: Not Supported 00:07:39.104 Fixed Capacity Management: Not Supported 00:07:39.104 Variable Capacity Management: Not Supported 00:07:39.104 Delete Endurance Group: Not Supported 00:07:39.104 Delete NVM Set: Not Supported 00:07:39.104 Extended LBA Formats Supported: Supported 00:07:39.104 Flexible Data Placement Supported: Not Supported 00:07:39.104 00:07:39.104 Controller Memory Buffer Support 00:07:39.104 ================================ 00:07:39.104 Supported: No 00:07:39.104 00:07:39.104 Persistent Memory Region Support 00:07:39.104 ================================ 00:07:39.104 Supported: No 00:07:39.104 00:07:39.104 Admin Command Set Attributes 00:07:39.104 ============================ 00:07:39.104 Security Send/Receive: 
Not Supported 00:07:39.105 Format NVM: Supported 00:07:39.105 Firmware Activate/Download: Not Supported 00:07:39.105 Namespace Management: Supported 00:07:39.105 Device Self-Test: Not Supported 00:07:39.105 Directives: Supported 00:07:39.105 NVMe-MI: Not Supported 00:07:39.105 Virtualization Management: Not Supported 00:07:39.105 Doorbell Buffer Config: Supported 00:07:39.105 Get LBA Status Capability: Not Supported 00:07:39.105 Command & Feature Lockdown Capability: Not Supported 00:07:39.105 Abort Command Limit: 4 00:07:39.105 Async Event Request Limit: 4 00:07:39.105 Number of Firmware Slots: N/A 00:07:39.105 Firmware Slot 1 Read-Only: N/A 00:07:39.105 Firmware Activation Without Reset: N/A 00:07:39.105 Multiple Update Detection Support: N/A 00:07:39.105 Firmware Update Granularity: No Information Provided 00:07:39.105 Per-Namespace SMART Log: Yes 00:07:39.105 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.105 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:39.105 Command Effects Log Page: Supported 00:07:39.105 Get Log Page Extended Data: Supported 00:07:39.105 Telemetry Log Pages: Not Supported 00:07:39.105 Persistent Event Log Pages: Not Supported 00:07:39.105 Supported Log Pages Log Page: May Support 00:07:39.105 Commands Supported & Effects Log Page: Not Supported 00:07:39.105 Feature Identifiers & Effects Log Page:May Support 00:07:39.106 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.106 Data Area 4 for Telemetry Log: Not Supported 00:07:39.106 Error Log Page Entries Supported: 1 00:07:39.106 Keep Alive: Not Supported 00:07:39.106 00:07:39.106 NVM Command Set Attributes 00:07:39.106 ========================== 00:07:39.106 Submission Queue Entry Size 00:07:39.106 Max: 64 00:07:39.106 Min: 64 00:07:39.106 Completion Queue Entry Size 00:07:39.106 Max: 16 00:07:39.112 Min: 16 00:07:39.112 Number of Namespaces: 256 00:07:39.112 Compare Command: Supported 00:07:39.112 Write Uncorrectable Command: Not Supported 00:07:39.112 Dataset Management Command: Supported 00:07:39.112 Write Zeroes Command: Supported 00:07:39.112 Set Features Save Field: Supported 00:07:39.112 Reservations: Not Supported 00:07:39.112 Timestamp: Supported 00:07:39.112 Copy: Supported 00:07:39.112 Volatile Write Cache: Present 00:07:39.112 Atomic Write Unit (Normal): 1 00:07:39.112 Atomic Write Unit (PFail): 1 00:07:39.112 Atomic Compare & Write Unit: 1 00:07:39.112 Fused Compare & Write: Not Supported 00:07:39.112 Scatter-Gather List 00:07:39.112 SGL Command Set: Supported 00:07:39.112 SGL Keyed: Not Supported 00:07:39.112 SGL Bit Bucket Descriptor: Not Supported 00:07:39.112 SGL Metadata Pointer: Not Supported 00:07:39.112 Oversized SGL: Not Supported 00:07:39.112 SGL Metadata Address: Not Supported 00:07:39.113 SGL Offset: Not Supported 00:07:39.113 Transport SGL Data Block: Not Supported 00:07:39.113 Replay Protected Memory Block: Not Supported 00:07:39.113 00:07:39.113 Firmware Slot Information 00:07:39.113 ========================= 00:07:39.113 Active slot: 1 00:07:39.113 Slot 1 Firmware Revision: 1.0 00:07:39.113 00:07:39.113 00:07:39.113 Commands Supported and Effects 00:07:39.113 ============================== 00:07:39.113 Admin Commands 00:07:39.113 -------------- 00:07:39.113 Delete I/O Submission Queue (00h): Supported 00:07:39.113 Create I/O Submission Queue (01h): Supported 00:07:39.113 Get Log Page (02h): Supported 00:07:39.113 Delete I/O Completion Queue (04h): Supported 00:07:39.113 Create I/O Completion Queue (05h): Supported 00:07:39.113 Identify (06h): Supported 
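# Hedged sketch of how per-controller dumps like the one being printed here
# can be reproduced by hand: enumerate the PCI addresses the way nvme_identify
# did above (gen_nvme.sh + jq) and point spdk_nvme_identify at each one via
# its -r transport-ID option (option availability assumed from current SPDK
# examples; this run invoked the tool with -i 0 instead):
for bdf in $(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'); do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf"
done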
00:07:39.113 Abort (08h): Supported 00:07:39.113 Set Features (09h): Supported 00:07:39.113 Get Features (0Ah): Supported 00:07:39.113 Asynchronous Event Request (0Ch): Supported 00:07:39.113 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.113 Directive Send (19h): Supported 00:07:39.113 Directive Receive (1Ah): Supported 00:07:39.113 Virtualization Management (1Ch): Supported 00:07:39.113 Doorbell Buffer Config (7Ch): Supported 00:07:39.113 Format NVM (80h): Supported LBA-Change 00:07:39.113 I/O Commands 00:07:39.113 ------------ 00:07:39.113 Flush (00h): Supported LBA-Change 00:07:39.113 Write (01h): Supported LBA-Change 00:07:39.113 Read (02h): Supported 00:07:39.113 Compare (05h): Supported 00:07:39.113 Write Zeroes (08h): Supported LBA-Change 00:07:39.113 Dataset Management (09h): Supported LBA-Change 00:07:39.113 Unknown (0Ch): Supported 00:07:39.113 Unknown (12h): Supported 00:07:39.113 Copy (19h): Supported LBA-Change 00:07:39.113 Unknown (1Dh): Supported LBA-Change 00:07:39.113 00:07:39.113 Error Log 00:07:39.113 ========= 00:07:39.113 00:07:39.113 Arbitration 00:07:39.113 =========== 00:07:39.113 Arbitration Burst: no limit 00:07:39.113 00:07:39.113 Power Management 00:07:39.113 ================ 00:07:39.113 Number of Power States: 1 00:07:39.113 Current Power State: Power State #0 00:07:39.113 Power State #0: 00:07:39.113 Max Power: 25.00 W 00:07:39.113 Non-Operational State: Operational 00:07:39.113 Entry Latency: 16 microseconds 00:07:39.113 Exit Latency: 4 microseconds 00:07:39.113 Relative Read Throughput: 0 00:07:39.113 Relative Read Latency: 0 00:07:39.113 Relative Write Throughput: 0 00:07:39.113 Relative Write Latency: 0 00:07:39.113 Idle Power[2024-12-06 10:06:45.209552] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63055 terminated unexpected 00:07:39.113 : Not Reported 00:07:39.113 Active Power: Not Reported 00:07:39.113 Non-Operational Permissive Mode: Not Supported 00:07:39.113 00:07:39.113 Health Information 00:07:39.113 ================== 00:07:39.113 Critical Warnings: 00:07:39.113 Available Spare Space: OK 00:07:39.113 Temperature: OK 00:07:39.113 Device Reliability: OK 00:07:39.113 Read Only: No 00:07:39.113 Volatile Memory Backup: OK 00:07:39.113 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.113 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.113 Available Spare: 0% 00:07:39.113 Available Spare Threshold: 0% 00:07:39.113 Life Percentage Used: 0% 00:07:39.113 Data Units Read: 609 00:07:39.113 Data Units Written: 537 00:07:39.113 Host Read Commands: 32133 00:07:39.113 Host Write Commands: 31919 00:07:39.113 Controller Busy Time: 0 minutes 00:07:39.113 Power Cycles: 0 00:07:39.113 Power On Hours: 0 hours 00:07:39.113 Unsafe Shutdowns: 0 00:07:39.113 Unrecoverable Media Errors: 0 00:07:39.113 Lifetime Error Log Entries: 0 00:07:39.113 Warning Temperature Time: 0 minutes 00:07:39.113 Critical Temperature Time: 0 minutes 00:07:39.113 00:07:39.113 Number of Queues 00:07:39.113 ================ 00:07:39.113 Number of I/O Submission Queues: 64 00:07:39.113 Number of I/O Completion Queues: 64 00:07:39.113 00:07:39.113 ZNS Specific Controller Data 00:07:39.113 ============================ 00:07:39.113 Zone Append Size Limit: 0 00:07:39.113 00:07:39.113 00:07:39.113 Active Namespaces 00:07:39.113 ================= 00:07:39.113 Namespace ID:1 00:07:39.113 Error Recovery Timeout: Unlimited 00:07:39.113 Command Set Identifier: NVM (00h) 00:07:39.113 Deallocate: Supported 00:07:39.113 
Deallocated/Unwritten Error: Supported 00:07:39.113 Deallocated Read Value: All 0x00 00:07:39.113 Deallocate in Write Zeroes: Not Supported 00:07:39.113 Deallocated Guard Field: 0xFFFF 00:07:39.113 Flush: Supported 00:07:39.113 Reservation: Not Supported 00:07:39.113 Metadata Transferred as: Separate Metadata Buffer 00:07:39.113 Namespace Sharing Capabilities: Private 00:07:39.113 Size (in LBAs): 1548666 (5GiB) 00:07:39.113 Capacity (in LBAs): 1548666 (5GiB) 00:07:39.113 Utilization (in LBAs): 1548666 (5GiB) 00:07:39.113 Thin Provisioning: Not Supported 00:07:39.113 Per-NS Atomic Units: No 00:07:39.113 Maximum Single Source Range Length: 128 00:07:39.113 Maximum Copy Length: 128 00:07:39.113 Maximum Source Range Count: 128 00:07:39.113 NGUID/EUI64 Never Reused: No 00:07:39.113 Namespace Write Protected: No 00:07:39.113 Number of LBA Formats: 8 00:07:39.113 Current LBA Format: LBA Format #07 00:07:39.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.113 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.113 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.113 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.113 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.113 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.113 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.113 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.113 00:07:39.113 NVM Specific Namespace Data 00:07:39.113 =========================== 00:07:39.113 Logical Block Storage Tag Mask: 0 00:07:39.113 Protection Information Capabilities: 00:07:39.113 16b Guard Protection Information Storage Tag Support: No 00:07:39.113 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.113 Storage Tag Check Read Support: No 00:07:39.113 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.113 ===================================================== 00:07:39.113 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.113 ===================================================== 00:07:39.113 Controller Capabilities/Features 00:07:39.113 ================================ 00:07:39.113 Vendor ID: 1b36 00:07:39.113 Subsystem Vendor ID: 1af4 00:07:39.113 Serial Number: 12341 00:07:39.113 Model Number: QEMU NVMe Ctrl 00:07:39.113 Firmware Version: 8.0.0 00:07:39.113 Recommended Arb Burst: 6 00:07:39.113 IEEE OUI Identifier: 00 54 52 00:07:39.113 Multi-path I/O 00:07:39.113 May have multiple subsystem ports: No 00:07:39.113 May have multiple controllers: No 00:07:39.113 Associated with SR-IOV VF: No 00:07:39.113 Max Data Transfer Size: 524288 00:07:39.113 Max Number of Namespaces: 256 00:07:39.113 Max Number of I/O Queues: 64 00:07:39.113 NVMe Specification Version (VS): 1.4 00:07:39.113 NVMe 
Specification Version (Identify): 1.4 00:07:39.113 Maximum Queue Entries: 2048 00:07:39.113 Contiguous Queues Required: Yes 00:07:39.113 Arbitration Mechanisms Supported 00:07:39.113 Weighted Round Robin: Not Supported 00:07:39.113 Vendor Specific: Not Supported 00:07:39.113 Reset Timeout: 7500 ms 00:07:39.113 Doorbell Stride: 4 bytes 00:07:39.113 NVM Subsystem Reset: Not Supported 00:07:39.113 Command Sets Supported 00:07:39.113 NVM Command Set: Supported 00:07:39.113 Boot Partition: Not Supported 00:07:39.113 Memory Page Size Minimum: 4096 bytes 00:07:39.113 Memory Page Size Maximum: 65536 bytes 00:07:39.113 Persistent Memory Region: Not Supported 00:07:39.113 Optional Asynchronous Events Supported 00:07:39.113 Namespace Attribute Notices: Supported 00:07:39.113 Firmware Activation Notices: Not Supported 00:07:39.113 ANA Change Notices: Not Supported 00:07:39.113 PLE Aggregate Log Change Notices: Not Supported 00:07:39.113 LBA Status Info Alert Notices: Not Supported 00:07:39.113 EGE Aggregate Log Change Notices: Not Supported 00:07:39.113 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.113 Zone Descriptor Change Notices: Not Supported 00:07:39.113 Discovery Log Change Notices: Not Supported 00:07:39.113 Controller Attributes 00:07:39.113 128-bit Host Identifier: Not Supported 00:07:39.113 Non-Operational Permissive Mode: Not Supported 00:07:39.113 NVM Sets: Not Supported 00:07:39.113 Read Recovery Levels: Not Supported 00:07:39.113 Endurance Groups: Not Supported 00:07:39.113 Predictable Latency Mode: Not Supported 00:07:39.113 Traffic Based Keep ALive: Not Supported 00:07:39.113 Namespace Granularity: Not Supported 00:07:39.113 SQ Associations: Not Supported 00:07:39.113 UUID List: Not Supported 00:07:39.113 Multi-Domain Subsystem: Not Supported 00:07:39.113 Fixed Capacity Management: Not Supported 00:07:39.113 Variable Capacity Management: Not Supported 00:07:39.113 Delete Endurance Group: Not Supported 00:07:39.113 Delete NVM Set: Not Supported 00:07:39.113 Extended LBA Formats Supported: Supported 00:07:39.113 Flexible Data Placement Supported: Not Supported 00:07:39.113 00:07:39.113 Controller Memory Buffer Support 00:07:39.113 ================================ 00:07:39.113 Supported: No 00:07:39.113 00:07:39.113 Persistent Memory Region Support 00:07:39.113 ================================ 00:07:39.113 Supported: No 00:07:39.113 00:07:39.113 Admin Command Set Attributes 00:07:39.113 ============================ 00:07:39.113 Security Send/Receive: Not Supported 00:07:39.113 Format NVM: Supported 00:07:39.113 Firmware Activate/Download: Not Supported 00:07:39.113 Namespace Management: Supported 00:07:39.113 Device Self-Test: Not Supported 00:07:39.113 Directives: Supported 00:07:39.113 NVMe-MI: Not Supported 00:07:39.113 Virtualization Management: Not Supported 00:07:39.113 Doorbell Buffer Config: Supported 00:07:39.113 Get LBA Status Capability: Not Supported 00:07:39.113 Command & Feature Lockdown Capability: Not Supported 00:07:39.113 Abort Command Limit: 4 00:07:39.113 Async Event Request Limit: 4 00:07:39.113 Number of Firmware Slots: N/A 00:07:39.113 Firmware Slot 1 Read-Only: N/A 00:07:39.113 Firmware Activation Without Reset: N/A 00:07:39.113 Multiple Update Detection Support: N/A 00:07:39.113 Firmware Update Granularity: No Information Provided 00:07:39.113 Per-Namespace SMART Log: Yes 00:07:39.113 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.113 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:39.113 Command Effects Log Page: Supported 
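# Illustrative post-processing of a captured identify dump: pull one line per
# controller out of the log to summarize the run (identify.log is a
# hypothetical capture file of the output above; the field names match what
# the tool prints):
grep -E 'Serial Number:|Subsystem NQN:' identify.log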
00:07:39.113 Get Log Page Extended Data: Supported 00:07:39.113 Telemetry Log Pages: Not Supported 00:07:39.113 Persistent Event Log Pages: Not Supported 00:07:39.113 Supported Log Pages Log Page: May Support 00:07:39.113 Commands Supported & Effects Log Page: Not Supported 00:07:39.113 Feature Identifiers & Effects Log Page:May Support 00:07:39.113 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.113 Data Area 4 for Telemetry Log: Not Supported 00:07:39.113 Error Log Page Entries Supported: 1 00:07:39.113 Keep Alive: Not Supported 00:07:39.113 00:07:39.113 NVM Command Set Attributes 00:07:39.113 ========================== 00:07:39.113 Submission Queue Entry Size 00:07:39.113 Max: 64 00:07:39.113 Min: 64 00:07:39.113 Completion Queue Entry Size 00:07:39.113 Max: 16 00:07:39.113 Min: 16 00:07:39.113 Number of Namespaces: 256 00:07:39.113 Compare Command: Supported 00:07:39.113 Write Uncorrectable Command: Not Supported 00:07:39.113 Dataset Management Command: Supported 00:07:39.113 Write Zeroes Command: Supported 00:07:39.113 Set Features Save Field: Supported 00:07:39.113 Reservations: Not Supported 00:07:39.113 Timestamp: Supported 00:07:39.113 Copy: Supported 00:07:39.113 Volatile Write Cache: Present 00:07:39.113 Atomic Write Unit (Normal): 1 00:07:39.113 Atomic Write Unit (PFail): 1 00:07:39.113 Atomic Compare & Write Unit: 1 00:07:39.113 Fused Compare & Write: Not Supported 00:07:39.113 Scatter-Gather List 00:07:39.113 SGL Command Set: Supported 00:07:39.113 SGL Keyed: Not Supported 00:07:39.113 SGL Bit Bucket Descriptor: Not Supported 00:07:39.113 SGL Metadata Pointer: Not Supported 00:07:39.113 Oversized SGL: Not Supported 00:07:39.113 SGL Metadata Address: Not Supported 00:07:39.113 SGL Offset: Not Supported 00:07:39.113 Transport SGL Data Block: Not Supported 00:07:39.113 Replay Protected Memory Block: Not Supported 00:07:39.113 00:07:39.113 Firmware Slot Information 00:07:39.113 ========================= 00:07:39.113 Active slot: 1 00:07:39.113 Slot 1 Firmware Revision: 1.0 00:07:39.113 00:07:39.113 00:07:39.113 Commands Supported and Effects 00:07:39.113 ============================== 00:07:39.113 Admin Commands 00:07:39.113 -------------- 00:07:39.113 Delete I/O Submission Queue (00h): Supported 00:07:39.113 Create I/O Submission Queue (01h): Supported 00:07:39.113 Get Log Page (02h): Supported 00:07:39.113 Delete I/O Completion Queue (04h): Supported 00:07:39.113 Create I/O Completion Queue (05h): Supported 00:07:39.113 Identify (06h): Supported 00:07:39.114 Abort (08h): Supported 00:07:39.114 Set Features (09h): Supported 00:07:39.114 Get Features (0Ah): Supported 00:07:39.114 Asynchronous Event Request (0Ch): Supported 00:07:39.114 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.114 Directive Send (19h): Supported 00:07:39.114 Directive Receive (1Ah): Supported 00:07:39.114 Virtualization Management (1Ch): Supported 00:07:39.114 Doorbell Buffer Config (7Ch): Supported 00:07:39.114 Format NVM (80h): Supported LBA-Change 00:07:39.114 I/O Commands 00:07:39.114 ------------ 00:07:39.114 Flush (00h): Supported LBA-Change 00:07:39.114 Write (01h): Supported LBA-Change 00:07:39.114 Read (02h): Supported 00:07:39.114 Compare (05h): Supported 00:07:39.114 Write Zeroes (08h): Supported LBA-Change 00:07:39.114 Dataset Management (09h): Supported LBA-Change 00:07:39.114 Unknown (0Ch): Supported 00:07:39.114 Unknown (12h): Supported 00:07:39.114 Copy (19h): Supported LBA-Change 00:07:39.114 Unknown (1Dh): Supported LBA-Change 00:07:39.114 00:07:39.114 Error 
Log 00:07:39.114 ========= 00:07:39.114 00:07:39.114 Arbitration 00:07:39.114 =========== 00:07:39.114 Arbitration Burst: no limit 00:07:39.114 00:07:39.114 Power Management 00:07:39.114 ================ 00:07:39.114 Number of Power States: 1 00:07:39.114 Current Power State: Power State #0 00:07:39.114 Power State #0: 00:07:39.114 Max Power: 25.00 W 00:07:39.114 Non-Operational State: Operational 00:07:39.114 Entry Latency: 16 microseconds 00:07:39.114 Exit Latency: 4 microseconds 00:07:39.114 Relative Read Throughput: 0 00:07:39.114 Relative Read Latency: 0 00:07:39.114 Relative Write Throughput: 0 00:07:39.114 Relative Write Latency: 0 00:07:39.114 Idle Power: Not Reported 00:07:39.114 Active Power: Not Reported 00:07:39.114 Non-Operational Permissive Mode: Not Supported 00:07:39.114 00:07:39.114 Health Information 00:07:39.114 ================== 00:07:39.114 Critical Warnings: 00:07:39.114 Available Spare Space: OK 00:07:39.114 Temperature: [2024-12-06 10:06:45.210970] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63055 terminated unexpected 00:07:39.114 OK 00:07:39.114 Device Reliability: OK 00:07:39.114 Read Only: No 00:07:39.114 Volatile Memory Backup: OK 00:07:39.114 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.114 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.114 Available Spare: 0% 00:07:39.114 Available Spare Threshold: 0% 00:07:39.114 Life Percentage Used: 0% 00:07:39.114 Data Units Read: 930 00:07:39.114 Data Units Written: 803 00:07:39.114 Host Read Commands: 48009 00:07:39.114 Host Write Commands: 46902 00:07:39.114 Controller Busy Time: 0 minutes 00:07:39.114 Power Cycles: 0 00:07:39.114 Power On Hours: 0 hours 00:07:39.114 Unsafe Shutdowns: 0 00:07:39.114 Unrecoverable Media Errors: 0 00:07:39.114 Lifetime Error Log Entries: 0 00:07:39.114 Warning Temperature Time: 0 minutes 00:07:39.114 Critical Temperature Time: 0 minutes 00:07:39.114 00:07:39.114 Number of Queues 00:07:39.114 ================ 00:07:39.114 Number of I/O Submission Queues: 64 00:07:39.114 Number of I/O Completion Queues: 64 00:07:39.114 00:07:39.114 ZNS Specific Controller Data 00:07:39.114 ============================ 00:07:39.114 Zone Append Size Limit: 0 00:07:39.114 00:07:39.114 00:07:39.114 Active Namespaces 00:07:39.114 ================= 00:07:39.114 Namespace ID:1 00:07:39.114 Error Recovery Timeout: Unlimited 00:07:39.114 Command Set Identifier: NVM (00h) 00:07:39.114 Deallocate: Supported 00:07:39.114 Deallocated/Unwritten Error: Supported 00:07:39.114 Deallocated Read Value: All 0x00 00:07:39.114 Deallocate in Write Zeroes: Not Supported 00:07:39.114 Deallocated Guard Field: 0xFFFF 00:07:39.114 Flush: Supported 00:07:39.114 Reservation: Not Supported 00:07:39.114 Namespace Sharing Capabilities: Private 00:07:39.114 Size (in LBAs): 1310720 (5GiB) 00:07:39.114 Capacity (in LBAs): 1310720 (5GiB) 00:07:39.114 Utilization (in LBAs): 1310720 (5GiB) 00:07:39.114 Thin Provisioning: Not Supported 00:07:39.114 Per-NS Atomic Units: No 00:07:39.114 Maximum Single Source Range Length: 128 00:07:39.114 Maximum Copy Length: 128 00:07:39.114 Maximum Source Range Count: 128 00:07:39.114 NGUID/EUI64 Never Reused: No 00:07:39.114 Namespace Write Protected: No 00:07:39.114 Number of LBA Formats: 8 00:07:39.114 Current LBA Format: LBA Format #04 00:07:39.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.114 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.114 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.114 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:39.114 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.114 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.114 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.114 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.114 00:07:39.114 NVM Specific Namespace Data 00:07:39.114 =========================== 00:07:39.114 Logical Block Storage Tag Mask: 0 00:07:39.114 Protection Information Capabilities: 00:07:39.114 16b Guard Protection Information Storage Tag Support: No 00:07:39.114 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.114 Storage Tag Check Read Support: No 00:07:39.114 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.114 ===================================================== 00:07:39.114 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.114 ===================================================== 00:07:39.114 Controller Capabilities/Features 00:07:39.114 ================================ 00:07:39.114 Vendor ID: 1b36 00:07:39.114 Subsystem Vendor ID: 1af4 00:07:39.114 Serial Number: 12343 00:07:39.114 Model Number: QEMU NVMe Ctrl 00:07:39.114 Firmware Version: 8.0.0 00:07:39.114 Recommended Arb Burst: 6 00:07:39.114 IEEE OUI Identifier: 00 54 52 00:07:39.114 Multi-path I/O 00:07:39.114 May have multiple subsystem ports: No 00:07:39.114 May have multiple controllers: Yes 00:07:39.114 Associated with SR-IOV VF: No 00:07:39.114 Max Data Transfer Size: 524288 00:07:39.114 Max Number of Namespaces: 256 00:07:39.114 Max Number of I/O Queues: 64 00:07:39.114 NVMe Specification Version (VS): 1.4 00:07:39.114 NVMe Specification Version (Identify): 1.4 00:07:39.114 Maximum Queue Entries: 2048 00:07:39.114 Contiguous Queues Required: Yes 00:07:39.114 Arbitration Mechanisms Supported 00:07:39.114 Weighted Round Robin: Not Supported 00:07:39.114 Vendor Specific: Not Supported 00:07:39.114 Reset Timeout: 7500 ms 00:07:39.114 Doorbell Stride: 4 bytes 00:07:39.114 NVM Subsystem Reset: Not Supported 00:07:39.114 Command Sets Supported 00:07:39.114 NVM Command Set: Supported 00:07:39.114 Boot Partition: Not Supported 00:07:39.114 Memory Page Size Minimum: 4096 bytes 00:07:39.114 Memory Page Size Maximum: 65536 bytes 00:07:39.114 Persistent Memory Region: Not Supported 00:07:39.114 Optional Asynchronous Events Supported 00:07:39.114 Namespace Attribute Notices: Supported 00:07:39.114 Firmware Activation Notices: Not Supported 00:07:39.114 ANA Change Notices: Not Supported 00:07:39.114 PLE Aggregate Log Change Notices: Not Supported 00:07:39.114 LBA Status Info Alert Notices: Not Supported 00:07:39.114 EGE Aggregate Log Change Notices: Not Supported 00:07:39.114 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.114 Zone 
Descriptor Change Notices: Not Supported 00:07:39.114 Discovery Log Change Notices: Not Supported 00:07:39.114 Controller Attributes 00:07:39.114 128-bit Host Identifier: Not Supported 00:07:39.114 Non-Operational Permissive Mode: Not Supported 00:07:39.114 NVM Sets: Not Supported 00:07:39.114 Read Recovery Levels: Not Supported 00:07:39.114 Endurance Groups: Supported 00:07:39.114 Predictable Latency Mode: Not Supported 00:07:39.114 Traffic Based Keep ALive: Not Supported 00:07:39.114 Namespace Granularity: Not Supported 00:07:39.114 SQ Associations: Not Supported 00:07:39.114 UUID List: Not Supported 00:07:39.114 Multi-Domain Subsystem: Not Supported 00:07:39.114 Fixed Capacity Management: Not Supported 00:07:39.114 Variable Capacity Management: Not Supported 00:07:39.114 Delete Endurance Group: Not Supported 00:07:39.114 Delete NVM Set: Not Supported 00:07:39.114 Extended LBA Formats Supported: Supported 00:07:39.114 Flexible Data Placement Supported: Supported 00:07:39.114 00:07:39.114 Controller Memory Buffer Support 00:07:39.114 ================================ 00:07:39.114 Supported: No 00:07:39.114 00:07:39.114 Persistent Memory Region Support 00:07:39.114 ================================ 00:07:39.114 Supported: No 00:07:39.114 00:07:39.114 Admin Command Set Attributes 00:07:39.114 ============================ 00:07:39.114 Security Send/Receive: Not Supported 00:07:39.114 Format NVM: Supported 00:07:39.114 Firmware Activate/Download: Not Supported 00:07:39.114 Namespace Management: Supported 00:07:39.114 Device Self-Test: Not Supported 00:07:39.114 Directives: Supported 00:07:39.114 NVMe-MI: Not Supported 00:07:39.114 Virtualization Management: Not Supported 00:07:39.114 Doorbell Buffer Config: Supported 00:07:39.114 Get LBA Status Capability: Not Supported 00:07:39.114 Command & Feature Lockdown Capability: Not Supported 00:07:39.114 Abort Command Limit: 4 00:07:39.114 Async Event Request Limit: 4 00:07:39.114 Number of Firmware Slots: N/A 00:07:39.114 Firmware Slot 1 Read-Only: N/A 00:07:39.114 Firmware Activation Without Reset: N/A 00:07:39.114 Multiple Update Detection Support: N/A 00:07:39.114 Firmware Update Granularity: No Information Provided 00:07:39.114 Per-Namespace SMART Log: Yes 00:07:39.114 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.114 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:39.114 Command Effects Log Page: Supported 00:07:39.114 Get Log Page Extended Data: Supported 00:07:39.114 Telemetry Log Pages: Not Supported 00:07:39.114 Persistent Event Log Pages: Not Supported 00:07:39.114 Supported Log Pages Log Page: May Support 00:07:39.114 Commands Supported & Effects Log Page: Not Supported 00:07:39.114 Feature Identifiers & Effects Log Page:May Support 00:07:39.114 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.114 Data Area 4 for Telemetry Log: Not Supported 00:07:39.114 Error Log Page Entries Supported: 1 00:07:39.114 Keep Alive: Not Supported 00:07:39.114 00:07:39.114 NVM Command Set Attributes 00:07:39.114 ========================== 00:07:39.114 Submission Queue Entry Size 00:07:39.114 Max: 64 00:07:39.114 Min: 64 00:07:39.114 Completion Queue Entry Size 00:07:39.114 Max: 16 00:07:39.114 Min: 16 00:07:39.114 Number of Namespaces: 256 00:07:39.114 Compare Command: Supported 00:07:39.114 Write Uncorrectable Command: Not Supported 00:07:39.114 Dataset Management Command: Supported 00:07:39.114 Write Zeroes Command: Supported 00:07:39.114 Set Features Save Field: Supported 00:07:39.114 Reservations: Not Supported 00:07:39.114 
Timestamp: Supported 00:07:39.114 Copy: Supported 00:07:39.114 Volatile Write Cache: Present 00:07:39.114 Atomic Write Unit (Normal): 1 00:07:39.114 Atomic Write Unit (PFail): 1 00:07:39.114 Atomic Compare & Write Unit: 1 00:07:39.114 Fused Compare & Write: Not Supported 00:07:39.114 Scatter-Gather List 00:07:39.114 SGL Command Set: Supported 00:07:39.114 SGL Keyed: Not Supported 00:07:39.114 SGL Bit Bucket Descriptor: Not Supported 00:07:39.114 SGL Metadata Pointer: Not Supported 00:07:39.114 Oversized SGL: Not Supported 00:07:39.114 SGL Metadata Address: Not Supported 00:07:39.114 SGL Offset: Not Supported 00:07:39.114 Transport SGL Data Block: Not Supported 00:07:39.114 Replay Protected Memory Block: Not Supported 00:07:39.114 00:07:39.114 Firmware Slot Information 00:07:39.114 ========================= 00:07:39.114 Active slot: 1 00:07:39.114 Slot 1 Firmware Revision: 1.0 00:07:39.114 00:07:39.114 00:07:39.114 Commands Supported and Effects 00:07:39.114 ============================== 00:07:39.114 Admin Commands 00:07:39.114 -------------- 00:07:39.114 Delete I/O Submission Queue (00h): Supported 00:07:39.114 Create I/O Submission Queue (01h): Supported 00:07:39.114 Get Log Page (02h): Supported 00:07:39.114 Delete I/O Completion Queue (04h): Supported 00:07:39.114 Create I/O Completion Queue (05h): Supported 00:07:39.114 Identify (06h): Supported 00:07:39.114 Abort (08h): Supported 00:07:39.114 Set Features (09h): Supported 00:07:39.114 Get Features (0Ah): Supported 00:07:39.114 Asynchronous Event Request (0Ch): Supported 00:07:39.114 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.114 Directive Send (19h): Supported 00:07:39.114 Directive Receive (1Ah): Supported 00:07:39.114 Virtualization Management (1Ch): Supported 00:07:39.114 Doorbell Buffer Config (7Ch): Supported 00:07:39.114 Format NVM (80h): Supported LBA-Change 00:07:39.114 I/O Commands 00:07:39.114 ------------ 00:07:39.114 Flush (00h): Supported LBA-Change 00:07:39.114 Write (01h): Supported LBA-Change 00:07:39.114 Read (02h): Supported 00:07:39.114 Compare (05h): Supported 00:07:39.114 Write Zeroes (08h): Supported LBA-Change 00:07:39.114 Dataset Management (09h): Supported LBA-Change 00:07:39.114 Unknown (0Ch): Supported 00:07:39.114 Unknown (12h): Supported 00:07:39.114 Copy (19h): Supported LBA-Change 00:07:39.114 Unknown (1Dh): Supported LBA-Change 00:07:39.114 00:07:39.114 Error Log 00:07:39.114 ========= 00:07:39.115 00:07:39.115 Arbitration 00:07:39.115 =========== 00:07:39.115 Arbitration Burst: no limit 00:07:39.115 00:07:39.115 Power Management 00:07:39.115 ================ 00:07:39.115 Number of Power States: 1 00:07:39.115 Current Power State: Power State #0 00:07:39.115 Power State #0: 00:07:39.115 Max Power: 25.00 W 00:07:39.115 Non-Operational State: Operational 00:07:39.115 Entry Latency: 16 microseconds 00:07:39.115 Exit Latency: 4 microseconds 00:07:39.115 Relative Read Throughput: 0 00:07:39.115 Relative Read Latency: 0 00:07:39.115 Relative Write Throughput: 0 00:07:39.115 Relative Write Latency: 0 00:07:39.115 Idle Power: Not Reported 00:07:39.115 Active Power: Not Reported 00:07:39.115 Non-Operational Permissive Mode: Not Supported 00:07:39.115 00:07:39.115 Health Information 00:07:39.115 ================== 00:07:39.115 Critical Warnings: 00:07:39.115 Available Spare Space: OK 00:07:39.115 Temperature: OK 00:07:39.115 Device Reliability: OK 00:07:39.115 Read Only: No 00:07:39.115 Volatile Memory Backup: OK 00:07:39.115 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.115 
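# The temperatures in these Health Information sections are printed in integer
# Kelvin with the Celsius value derived by a plain 273 offset, e.g.
# 323 K - 273 = 50 C for the current temperature and 343 K - 273 = 70 C for
# the threshold; a one-line check:
kelvin=323; echo "$((kelvin - 273)) Celsius"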
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.115 Available Spare: 0% 00:07:39.115 Available Spare Threshold: 0% 00:07:39.115 Life Percentage Used: 0% 00:07:39.115 Data Units Read: 724 00:07:39.115 Data Units Written: 653 00:07:39.115 Host Read Commands: 33453 00:07:39.115 Host Write Commands: 32876 00:07:39.115 Controller Busy Time: 0 minutes 00:07:39.115 Power Cycles: 0 00:07:39.115 Power On Hours: 0 hours 00:07:39.115 Unsafe Shutdowns: 0 00:07:39.115 Unrecoverable Media Errors: 0 00:07:39.115 Lifetime Error Log Entries: 0 00:07:39.115 Warning Temperature Time: 0 minutes 00:07:39.115 Critical Temperature Time: 0 minutes 00:07:39.115 00:07:39.115 Number of Queues 00:07:39.115 ================ 00:07:39.115 Number of I/O Submission Queues: 64 00:07:39.115 Number of I/O Completion Queues: 64 00:07:39.115 00:07:39.115 ZNS Specific Controller Data 00:07:39.115 ============================ 00:07:39.115 Zone Append Size Limit: 0 00:07:39.115 00:07:39.115 00:07:39.115 Active Namespaces 00:07:39.115 ================= 00:07:39.115 Namespace ID:1 00:07:39.115 Error Recovery Timeout: Unlimited 00:07:39.115 Command Set Identifier: NVM (00h) 00:07:39.115 Deallocate: Supported 00:07:39.115 Deallocated/Unwritten Error: Supported 00:07:39.115 Deallocated Read Value: All 0x00 00:07:39.115 Deallocate in Write Zeroes: Not Supported 00:07:39.115 Deallocated Guard Field: 0xFFFF 00:07:39.115 Flush: Supported 00:07:39.115 Reservation: Not Supported 00:07:39.115 Namespace Sharing Capabilities: Multiple Controllers 00:07:39.115 Size (in LBAs): 262144 (1GiB) 00:07:39.115 Capacity (in LBAs): 262144 (1GiB) 00:07:39.115 Utilization (in LBAs): 262144 (1GiB) 00:07:39.115 Thin Provisioning: Not Supported 00:07:39.115 Per-NS Atomic Units: No 00:07:39.115 Maximum Single Source Range Length: 128 00:07:39.115 Maximum Copy Length: 128 00:07:39.115 Maximum Source Range Count: 128 00:07:39.115 NGUID/EUI64 Never Reused: No 00:07:39.115 Namespace Write Protected: No 00:07:39.115 Endurance group ID: 1 00:07:39.115 Number of LBA Formats: 8 00:07:39.115 Current LBA Format: LBA Format #04 00:07:39.115 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.115 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.115 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.115 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.115 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.115 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.115 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.115 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.115 00:07:39.115 Get Feature FDP: 00:07:39.115 ================ 00:07:39.115 Enabled: Yes 00:07:39.115 FDP configuration index: 0 00:07:39.115 00:07:39.115 FDP configurations log page 00:07:39.115 =========================== 00:07:39.115 Number of FDP configurations: 1 00:07:39.115 Version: 0 00:07:39.115 Size: 112 00:07:39.115 FDP Configuration Descriptor: 0 00:07:39.115 Descriptor Size: 96 00:07:39.115 Reclaim Group Identifier format: 2 00:07:39.115 FDP Volatile Write Cache: Not Present 00:07:39.115 FDP Configuration: Valid 00:07:39.115 Vendor Specific Size: 0 00:07:39.115 Number of Reclaim Groups: 2 00:07:39.115 Number of Reclaim Unit Handles: 8 00:07:39.115 Max Placement Identifiers: 128 00:07:39.115 Number of Namespaces Supported: 256 00:07:39.115 Reclaim Unit Nominal Size: 6000000 bytes 00:07:39.115 Estimated Reclaim Unit Time Limit: Not Reported 00:07:39.115 RUH Desc #000: RUH Type: Initially Isolated 00:07:39.115 RUH Desc #001: RUH
Type: Initially Isolated 00:07:39.115 RUH Desc #002: RUH Type: Initially Isolated 00:07:39.115 RUH Desc #003: RUH Type: Initially Isolated 00:07:39.115 RUH Desc #004: RUH Type: Initially Isolated 00:07:39.115 RUH Desc #005: RUH Type: Initially Isolated 00:07:39.115 RUH Desc #006: RUH Type: Initially Isolated 00:07:39.115 RUH Desc #007: RUH Type: Initially Isolated 00:07:39.115 00:07:39.115 FDP reclaim unit handle usage log page 00:07:39.115 ====================================== 00:07:39.115 Number of Reclaim Unit Handles: 8 00:07:39.115 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:39.115 RUH Usage Desc #001: RUH Attributes: Unused 00:07:39.115 RUH Usage Desc #002: RUH Attributes: Unused 00:07:39.115 RUH Usage Desc #003: RUH Attributes: Unused 00:07:39.115 RUH Usage Desc #004: RUH Attributes: Unused 00:07:39.115 RUH Usage Desc #005: RUH Attributes: Unused 00:07:39.115 RUH Usage Desc #006: RUH Attributes: Unused 00:07:39.115 RUH Usage Desc #007: RUH Attributes: Unused 00:07:39.115 00:07:39.115 FDP statistics log page 00:07:39.115 ======================= 00:07:39.115 Host bytes with metadata written: 382836736 00:07:39.115 Media[2024-12-06 10:06:45.214144] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63055 terminated unexpected 00:07:39.115 bytes with metadata written: 382877696 00:07:39.115 Media bytes erased: 0 00:07:39.115 00:07:39.115 FDP events log page 00:07:39.115 =================== 00:07:39.115 Number of FDP events: 0 00:07:39.115 00:07:39.115 NVM Specific Namespace Data 00:07:39.115 =========================== 00:07:39.115 Logical Block Storage Tag Mask: 0 00:07:39.115 Protection Information Capabilities: 00:07:39.115 16b Guard Protection Information Storage Tag Support: No 00:07:39.115 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.115 Storage Tag Check Read Support: No 00:07:39.115 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.115 ===================================================== 00:07:39.115 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.115 ===================================================== 00:07:39.115 Controller Capabilities/Features 00:07:39.115 ================================ 00:07:39.115 Vendor ID: 1b36 00:07:39.115 Subsystem Vendor ID: 1af4 00:07:39.115 Serial Number: 12342 00:07:39.115 Model Number: QEMU NVMe Ctrl 00:07:39.115 Firmware Version: 8.0.0 00:07:39.115 Recommended Arb Burst: 6 00:07:39.115 IEEE OUI Identifier: 00 54 52 00:07:39.115 Multi-path I/O 00:07:39.115 May have multiple subsystem ports: No 00:07:39.115 May have multiple controllers: No 00:07:39.115 Associated with SR-IOV VF: No 00:07:39.115 Max Data Transfer Size: 524288 00:07:39.115 Max Number of Namespaces: 256 00:07:39.115 
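# FDP statistics sanity check on the figures printed above: one would expect
# media bytes written with metadata to be at least the host bytes written,
# since the media figure typically includes the controller's own placement
# overhead; here the delta works out to ten 4 KiB pages:
echo $((382877696 - 382836736))             # 40960 bytes
echo $(( (382877696 - 382836736) / 4096 ))  # 10 pages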
Max Number of I/O Queues: 64 00:07:39.115 NVMe Specification Version (VS): 1.4 00:07:39.115 NVMe Specification Version (Identify): 1.4 00:07:39.115 Maximum Queue Entries: 2048 00:07:39.115 Contiguous Queues Required: Yes 00:07:39.115 Arbitration Mechanisms Supported 00:07:39.115 Weighted Round Robin: Not Supported 00:07:39.115 Vendor Specific: Not Supported 00:07:39.115 Reset Timeout: 7500 ms 00:07:39.115 Doorbell Stride: 4 bytes 00:07:39.115 NVM Subsystem Reset: Not Supported 00:07:39.115 Command Sets Supported 00:07:39.115 NVM Command Set: Supported 00:07:39.115 Boot Partition: Not Supported 00:07:39.115 Memory Page Size Minimum: 4096 bytes 00:07:39.115 Memory Page Size Maximum: 65536 bytes 00:07:39.115 Persistent Memory Region: Not Supported 00:07:39.115 Optional Asynchronous Events Supported 00:07:39.115 Namespace Attribute Notices: Supported 00:07:39.115 Firmware Activation Notices: Not Supported 00:07:39.115 ANA Change Notices: Not Supported 00:07:39.115 PLE Aggregate Log Change Notices: Not Supported 00:07:39.115 LBA Status Info Alert Notices: Not Supported 00:07:39.115 EGE Aggregate Log Change Notices: Not Supported 00:07:39.115 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.115 Zone Descriptor Change Notices: Not Supported 00:07:39.115 Discovery Log Change Notices: Not Supported 00:07:39.115 Controller Attributes 00:07:39.115 128-bit Host Identifier: Not Supported 00:07:39.115 Non-Operational Permissive Mode: Not Supported 00:07:39.115 NVM Sets: Not Supported 00:07:39.115 Read Recovery Levels: Not Supported 00:07:39.115 Endurance Groups: Not Supported 00:07:39.115 Predictable Latency Mode: Not Supported 00:07:39.115 Traffic Based Keep ALive: Not Supported 00:07:39.115 Namespace Granularity: Not Supported 00:07:39.115 SQ Associations: Not Supported 00:07:39.115 UUID List: Not Supported 00:07:39.115 Multi-Domain Subsystem: Not Supported 00:07:39.115 Fixed Capacity Management: Not Supported 00:07:39.115 Variable Capacity Management: Not Supported 00:07:39.115 Delete Endurance Group: Not Supported 00:07:39.115 Delete NVM Set: Not Supported 00:07:39.115 Extended LBA Formats Supported: Supported 00:07:39.115 Flexible Data Placement Supported: Not Supported 00:07:39.115 00:07:39.115 Controller Memory Buffer Support 00:07:39.115 ================================ 00:07:39.115 Supported: No 00:07:39.115 00:07:39.115 Persistent Memory Region Support 00:07:39.115 ================================ 00:07:39.115 Supported: No 00:07:39.115 00:07:39.115 Admin Command Set Attributes 00:07:39.115 ============================ 00:07:39.115 Security Send/Receive: Not Supported 00:07:39.115 Format NVM: Supported 00:07:39.115 Firmware Activate/Download: Not Supported 00:07:39.115 Namespace Management: Supported 00:07:39.115 Device Self-Test: Not Supported 00:07:39.115 Directives: Supported 00:07:39.115 NVMe-MI: Not Supported 00:07:39.115 Virtualization Management: Not Supported 00:07:39.115 Doorbell Buffer Config: Supported 00:07:39.115 Get LBA Status Capability: Not Supported 00:07:39.115 Command & Feature Lockdown Capability: Not Supported 00:07:39.115 Abort Command Limit: 4 00:07:39.115 Async Event Request Limit: 4 00:07:39.116 Number of Firmware Slots: N/A 00:07:39.116 Firmware Slot 1 Read-Only: N/A 00:07:39.116 Firmware Activation Without Reset: N/A 00:07:39.116 Multiple Update Detection Support: N/A 00:07:39.116 Firmware Update Granularity: No Information Provided 00:07:39.116 Per-Namespace SMART Log: Yes 00:07:39.116 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.116 
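The "Max Data Transfer Size: 524288" reported for this controller follows the NVMe MDTS rule: the transfer limit is 2^MDTS times the controller's minimum memory page size, which the dump gives as 4096 bytes, so this output corresponds to MDTS = 7. A quick shell-arithmetic check (the variable names are just for illustration):

mdts=7      # inferred: 524288 / 4096 = 128 = 2^7
mpsmin=4096 # "Memory Page Size Minimum" from the dump above
echo $(( (1 << mdts) * mpsmin ))  # prints 524288, matching the dump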
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:39.116 Command Effects Log Page: Supported 00:07:39.116 Get Log Page Extended Data: Supported 00:07:39.116 Telemetry Log Pages: Not Supported 00:07:39.116 Persistent Event Log Pages: Not Supported 00:07:39.116 Supported Log Pages Log Page: May Support 00:07:39.116 Commands Supported & Effects Log Page: Not Supported 00:07:39.116 Feature Identifiers & Effects Log Page:May Support 00:07:39.116 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.116 Data Area 4 for Telemetry Log: Not Supported 00:07:39.116 Error Log Page Entries Supported: 1 00:07:39.116 Keep Alive: Not Supported 00:07:39.116 00:07:39.116 NVM Command Set Attributes 00:07:39.116 ========================== 00:07:39.116 Submission Queue Entry Size 00:07:39.116 Max: 64 00:07:39.116 Min: 64 00:07:39.116 Completion Queue Entry Size 00:07:39.116 Max: 16 00:07:39.116 Min: 16 00:07:39.116 Number of Namespaces: 256 00:07:39.116 Compare Command: Supported 00:07:39.116 Write Uncorrectable Command: Not Supported 00:07:39.116 Dataset Management Command: Supported 00:07:39.116 Write Zeroes Command: Supported 00:07:39.116 Set Features Save Field: Supported 00:07:39.116 Reservations: Not Supported 00:07:39.116 Timestamp: Supported 00:07:39.116 Copy: Supported 00:07:39.116 Volatile Write Cache: Present 00:07:39.116 Atomic Write Unit (Normal): 1 00:07:39.116 Atomic Write Unit (PFail): 1 00:07:39.116 Atomic Compare & Write Unit: 1 00:07:39.116 Fused Compare & Write: Not Supported 00:07:39.116 Scatter-Gather List 00:07:39.116 SGL Command Set: Supported 00:07:39.116 SGL Keyed: Not Supported 00:07:39.116 SGL Bit Bucket Descriptor: Not Supported 00:07:39.116 SGL Metadata Pointer: Not Supported 00:07:39.116 Oversized SGL: Not Supported 00:07:39.116 SGL Metadata Address: Not Supported 00:07:39.116 SGL Offset: Not Supported 00:07:39.116 Transport SGL Data Block: Not Supported 00:07:39.116 Replay Protected Memory Block: Not Supported 00:07:39.116 00:07:39.116 Firmware Slot Information 00:07:39.116 ========================= 00:07:39.116 Active slot: 1 00:07:39.116 Slot 1 Firmware Revision: 1.0 00:07:39.116 00:07:39.116 00:07:39.116 Commands Supported and Effects 00:07:39.116 ============================== 00:07:39.116 Admin Commands 00:07:39.116 -------------- 00:07:39.116 Delete I/O Submission Queue (00h): Supported 00:07:39.116 Create I/O Submission Queue (01h): Supported 00:07:39.116 Get Log Page (02h): Supported 00:07:39.116 Delete I/O Completion Queue (04h): Supported 00:07:39.116 Create I/O Completion Queue (05h): Supported 00:07:39.116 Identify (06h): Supported 00:07:39.116 Abort (08h): Supported 00:07:39.116 Set Features (09h): Supported 00:07:39.116 Get Features (0Ah): Supported 00:07:39.116 Asynchronous Event Request (0Ch): Supported 00:07:39.116 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.116 Directive Send (19h): Supported 00:07:39.116 Directive Receive (1Ah): Supported 00:07:39.116 Virtualization Management (1Ch): Supported 00:07:39.116 Doorbell Buffer Config (7Ch): Supported 00:07:39.116 Format NVM (80h): Supported LBA-Change 00:07:39.116 I/O Commands 00:07:39.116 ------------ 00:07:39.116 Flush (00h): Supported LBA-Change 00:07:39.116 Write (01h): Supported LBA-Change 00:07:39.116 Read (02h): Supported 00:07:39.116 Compare (05h): Supported 00:07:39.116 Write Zeroes (08h): Supported LBA-Change 00:07:39.116 Dataset Management (09h): Supported LBA-Change 00:07:39.116 Unknown (0Ch): Supported 00:07:39.116 Unknown (12h): Supported 00:07:39.116 Copy (19h): Supported 
LBA-Change 00:07:39.116 Unknown (1Dh): Supported LBA-Change 00:07:39.116 00:07:39.116 Error Log 00:07:39.116 ========= 00:07:39.116 00:07:39.116 Arbitration 00:07:39.116 =========== 00:07:39.116 Arbitration Burst: no limit 00:07:39.116 00:07:39.116 Power Management 00:07:39.116 ================ 00:07:39.116 Number of Power States: 1 00:07:39.116 Current Power State: Power State #0 00:07:39.116 Power State #0: 00:07:39.116 Max Power: 25.00 W 00:07:39.116 Non-Operational State: Operational 00:07:39.116 Entry Latency: 16 microseconds 00:07:39.116 Exit Latency: 4 microseconds 00:07:39.116 Relative Read Throughput: 0 00:07:39.116 Relative Read Latency: 0 00:07:39.116 Relative Write Throughput: 0 00:07:39.116 Relative Write Latency: 0 00:07:39.116 Idle Power: Not Reported 00:07:39.116 Active Power: Not Reported 00:07:39.116 Non-Operational Permissive Mode: Not Supported 00:07:39.116 00:07:39.116 Health Information 00:07:39.116 ================== 00:07:39.116 Critical Warnings: 00:07:39.116 Available Spare Space: OK 00:07:39.116 Temperature: OK 00:07:39.116 Device Reliability: OK 00:07:39.116 Read Only: No 00:07:39.116 Volatile Memory Backup: OK 00:07:39.116 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.116 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.116 Available Spare: 0% 00:07:39.116 Available Spare Threshold: 0% 00:07:39.116 Life Percentage Used: 0% 00:07:39.116 Data Units Read: 1956 00:07:39.116 Data Units Written: 1743 00:07:39.116 Host Read Commands: 98202 00:07:39.116 Host Write Commands: 96471 00:07:39.116 Controller Busy Time: 0 minutes 00:07:39.116 Power Cycles: 0 00:07:39.116 Power On Hours: 0 hours 00:07:39.116 Unsafe Shutdowns: 0 00:07:39.116 Unrecoverable Media Errors: 0 00:07:39.116 Lifetime Error Log Entries: 0 00:07:39.116 Warning Temperature Time: 0 minutes 00:07:39.116 Critical Temperature Time: 0 minutes 00:07:39.116 00:07:39.116 Number of Queues 00:07:39.116 ================ 00:07:39.116 Number of I/O Submission Queues: 64 00:07:39.116 Number of I/O Completion Queues: 64 00:07:39.116 00:07:39.116 ZNS Specific Controller Data 00:07:39.116 ============================ 00:07:39.116 Zone Append Size Limit: 0 00:07:39.116 00:07:39.116 00:07:39.116 Active Namespaces 00:07:39.116 ================= 00:07:39.116 Namespace ID:1 00:07:39.116 Error Recovery Timeout: Unlimited 00:07:39.116 Command Set Identifier: NVM (00h) 00:07:39.116 Deallocate: Supported 00:07:39.116 Deallocated/Unwritten Error: Supported 00:07:39.116 Deallocated Read Value: All 0x00 00:07:39.116 Deallocate in Write Zeroes: Not Supported 00:07:39.116 Deallocated Guard Field: 0xFFFF 00:07:39.116 Flush: Supported 00:07:39.116 Reservation: Not Supported 00:07:39.116 Namespace Sharing Capabilities: Private 00:07:39.116 Size (in LBAs): 1048576 (4GiB) 00:07:39.116 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.116 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.116 Thin Provisioning: Not Supported 00:07:39.116 Per-NS Atomic Units: No 00:07:39.116 Maximum Single Source Range Length: 128 00:07:39.116 Maximum Copy Length: 128 00:07:39.116 Maximum Source Range Count: 128 00:07:39.116 NGUID/EUI64 Never Reused: No 00:07:39.116 Namespace Write Protected: No 00:07:39.116 Number of LBA Formats: 8 00:07:39.116 Current LBA Format: LBA Format #04 00:07:39.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.116 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:39.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.116 00:07:39.116 NVM Specific Namespace Data 00:07:39.116 =========================== 00:07:39.116 Logical Block Storage Tag Mask: 0 00:07:39.116 Protection Information Capabilities: 00:07:39.116 16b Guard Protection Information Storage Tag Support: No 00:07:39.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.116 Storage Tag Check Read Support: No 00:07:39.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Namespace ID:2 00:07:39.116 Error Recovery Timeout: Unlimited 00:07:39.116 Command Set Identifier: NVM (00h) 00:07:39.116 Deallocate: Supported 00:07:39.116 Deallocated/Unwritten Error: Supported 00:07:39.116 Deallocated Read Value: All 0x00 00:07:39.116 Deallocate in Write Zeroes: Not Supported 00:07:39.116 Deallocated Guard Field: 0xFFFF 00:07:39.116 Flush: Supported 00:07:39.116 Reservation: Not Supported 00:07:39.116 Namespace Sharing Capabilities: Private 00:07:39.116 Size (in LBAs): 1048576 (4GiB) 00:07:39.116 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.116 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.116 Thin Provisioning: Not Supported 00:07:39.116 Per-NS Atomic Units: No 00:07:39.116 Maximum Single Source Range Length: 128 00:07:39.116 Maximum Copy Length: 128 00:07:39.116 Maximum Source Range Count: 128 00:07:39.116 NGUID/EUI64 Never Reused: No 00:07:39.116 Namespace Write Protected: No 00:07:39.116 Number of LBA Formats: 8 00:07:39.116 Current LBA Format: LBA Format #04 00:07:39.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.116 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.116 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.116 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.116 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.116 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.116 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.116 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.116 00:07:39.116 NVM Specific Namespace Data 00:07:39.116 =========================== 00:07:39.116 Logical Block Storage Tag Mask: 0 00:07:39.116 Protection Information Capabilities: 00:07:39.116 16b Guard Protection Information Storage Tag Support: No 00:07:39.116 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.116 Storage Tag Check Read Support: No 00:07:39.116 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:39.116 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.116 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.120 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.120 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.120 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.120 Namespace ID:3 00:07:39.120 Error Recovery Timeout: Unlimited 00:07:39.120 Command Set Identifier: NVM (00h) 00:07:39.120 Deallocate: Supported 00:07:39.120 Deallocated/Unwritten Error: Supported 00:07:39.120 Deallocated Read Value: All 0x00 00:07:39.120 Deallocate in Write Zeroes: Not Supported 00:07:39.120 Deallocated Guard Field: 0xFFFF 00:07:39.120 Flush: Supported 00:07:39.120 Reservation: Not Supported 00:07:39.120 Namespace Sharing Capabilities: Private 00:07:39.120 Size (in LBAs): 1048576 (4GiB) 00:07:39.120 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.120 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.120 Thin Provisioning: Not Supported 00:07:39.120 Per-NS Atomic Units: No 00:07:39.120 Maximum Single Source Range Length: 128 00:07:39.120 Maximum Copy Length: 128 00:07:39.120 Maximum Source Range Count: 128 00:07:39.120 NGUID/EUI64 Never Reused: No 00:07:39.120 Namespace Write Protected: No 00:07:39.121 Number of LBA Formats: 8 00:07:39.121 Current LBA Format: LBA Format #04 00:07:39.121 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.121 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.121 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.121 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.121 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.121 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.121 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.121 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.121 00:07:39.121 NVM Specific Namespace Data 00:07:39.121 =========================== 00:07:39.121 Logical Block Storage Tag Mask: 0 00:07:39.121 Protection Information Capabilities: 00:07:39.121 16b Guard Protection Information Storage Tag Support: No 00:07:39.121 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.121 Storage Tag Check Read Support: No 00:07:39.121 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.121 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.121 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.121 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.121 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.121 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.122 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.122 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.122 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:39.122 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:39.384 ===================================================== 00:07:39.384 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:39.384 ===================================================== 00:07:39.384 Controller Capabilities/Features 00:07:39.384 ================================ 00:07:39.384 Vendor ID: 1b36 00:07:39.384 Subsystem Vendor ID: 1af4 00:07:39.384 Serial Number: 12340 00:07:39.384 Model Number: QEMU NVMe Ctrl 00:07:39.384 Firmware Version: 8.0.0 00:07:39.384 Recommended Arb Burst: 6 00:07:39.384 IEEE OUI Identifier: 00 54 52 00:07:39.384 Multi-path I/O 00:07:39.384 May have multiple subsystem ports: No 00:07:39.384 May have multiple controllers: No 00:07:39.384 Associated with SR-IOV VF: No 00:07:39.384 Max Data Transfer Size: 524288 00:07:39.384 Max Number of Namespaces: 256 00:07:39.384 Max Number of I/O Queues: 64 00:07:39.384 NVMe Specification Version (VS): 1.4 00:07:39.384 NVMe Specification Version (Identify): 1.4 00:07:39.384 Maximum Queue Entries: 2048 00:07:39.384 Contiguous Queues Required: Yes 00:07:39.384 Arbitration Mechanisms Supported 00:07:39.384 Weighted Round Robin: Not Supported 00:07:39.384 Vendor Specific: Not Supported 00:07:39.384 Reset Timeout: 7500 ms 00:07:39.384 Doorbell Stride: 4 bytes 00:07:39.384 NVM Subsystem Reset: Not Supported 00:07:39.384 Command Sets Supported 00:07:39.384 NVM Command Set: Supported 00:07:39.384 Boot Partition: Not Supported 00:07:39.385 Memory Page Size Minimum: 4096 bytes 00:07:39.385 Memory Page Size Maximum: 65536 bytes 00:07:39.385 Persistent Memory Region: Not Supported 00:07:39.385 Optional Asynchronous Events Supported 00:07:39.385 Namespace Attribute Notices: Supported 00:07:39.385 Firmware Activation Notices: Not Supported 00:07:39.385 ANA Change Notices: Not Supported 00:07:39.385 PLE Aggregate Log Change Notices: Not Supported 00:07:39.385 LBA Status Info Alert Notices: Not Supported 00:07:39.385 EGE Aggregate Log Change Notices: Not Supported 00:07:39.385 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.385 Zone Descriptor Change Notices: Not Supported 00:07:39.385 Discovery Log Change Notices: Not Supported 00:07:39.385 Controller Attributes 00:07:39.385 128-bit Host Identifier: Not Supported 00:07:39.385 Non-Operational Permissive Mode: Not Supported 00:07:39.385 NVM Sets: Not Supported 00:07:39.385 Read Recovery Levels: Not Supported 00:07:39.385 Endurance Groups: Not Supported 00:07:39.385 Predictable Latency Mode: Not Supported 00:07:39.385 Traffic Based Keep ALive: Not Supported 00:07:39.385 Namespace Granularity: Not Supported 00:07:39.385 SQ Associations: Not Supported 00:07:39.385 UUID List: Not Supported 00:07:39.385 Multi-Domain Subsystem: Not Supported 00:07:39.385 Fixed Capacity Management: Not Supported 00:07:39.385 Variable Capacity Management: Not Supported 00:07:39.385 Delete Endurance Group: Not Supported 00:07:39.385 Delete NVM Set: Not Supported 00:07:39.385 Extended LBA Formats Supported: Supported 00:07:39.385 Flexible Data Placement Supported: Not Supported 00:07:39.385 00:07:39.385 Controller Memory Buffer Support 00:07:39.385 ================================ 00:07:39.385 Supported: No 00:07:39.385 00:07:39.385 Persistent Memory Region Support 00:07:39.385 ================================ 00:07:39.385 Supported: No 00:07:39.385 00:07:39.385 Admin Command Set Attributes 00:07:39.385 ============================ 00:07:39.385 Security Send/Receive: Not Supported 00:07:39.385 
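The trace lines above show nvme.sh's identify loop: for each PCIe BDF it invokes spdk_nvme_identify against that traddr with the flags seen in the log. A self-contained sketch of the same pattern, with the BDF list written out explicitly (the script itself derives the list from the attached devices at runtime):

# Sketch of the per-device loop visible in the nvme.sh trace; the BDF list
# here is illustrative, matching the four controllers probed in this job.
bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:${bdf}" -i 0
done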
Format NVM: Supported 00:07:39.385 Firmware Activate/Download: Not Supported 00:07:39.385 Namespace Management: Supported 00:07:39.385 Device Self-Test: Not Supported 00:07:39.385 Directives: Supported 00:07:39.385 NVMe-MI: Not Supported 00:07:39.385 Virtualization Management: Not Supported 00:07:39.385 Doorbell Buffer Config: Supported 00:07:39.385 Get LBA Status Capability: Not Supported 00:07:39.385 Command & Feature Lockdown Capability: Not Supported 00:07:39.385 Abort Command Limit: 4 00:07:39.385 Async Event Request Limit: 4 00:07:39.385 Number of Firmware Slots: N/A 00:07:39.385 Firmware Slot 1 Read-Only: N/A 00:07:39.385 Firmware Activation Without Reset: N/A 00:07:39.385 Multiple Update Detection Support: N/A 00:07:39.385 Firmware Update Granularity: No Information Provided 00:07:39.385 Per-Namespace SMART Log: Yes 00:07:39.385 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.385 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:39.385 Command Effects Log Page: Supported 00:07:39.385 Get Log Page Extended Data: Supported 00:07:39.385 Telemetry Log Pages: Not Supported 00:07:39.385 Persistent Event Log Pages: Not Supported 00:07:39.385 Supported Log Pages Log Page: May Support 00:07:39.385 Commands Supported & Effects Log Page: Not Supported 00:07:39.385 Feature Identifiers & Effects Log Page:May Support 00:07:39.385 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.385 Data Area 4 for Telemetry Log: Not Supported 00:07:39.385 Error Log Page Entries Supported: 1 00:07:39.385 Keep Alive: Not Supported 00:07:39.385 00:07:39.385 NVM Command Set Attributes 00:07:39.385 ========================== 00:07:39.385 Submission Queue Entry Size 00:07:39.385 Max: 64 00:07:39.385 Min: 64 00:07:39.385 Completion Queue Entry Size 00:07:39.385 Max: 16 00:07:39.385 Min: 16 00:07:39.385 Number of Namespaces: 256 00:07:39.385 Compare Command: Supported 00:07:39.385 Write Uncorrectable Command: Not Supported 00:07:39.385 Dataset Management Command: Supported 00:07:39.385 Write Zeroes Command: Supported 00:07:39.385 Set Features Save Field: Supported 00:07:39.385 Reservations: Not Supported 00:07:39.385 Timestamp: Supported 00:07:39.385 Copy: Supported 00:07:39.385 Volatile Write Cache: Present 00:07:39.385 Atomic Write Unit (Normal): 1 00:07:39.385 Atomic Write Unit (PFail): 1 00:07:39.385 Atomic Compare & Write Unit: 1 00:07:39.385 Fused Compare & Write: Not Supported 00:07:39.385 Scatter-Gather List 00:07:39.385 SGL Command Set: Supported 00:07:39.385 SGL Keyed: Not Supported 00:07:39.385 SGL Bit Bucket Descriptor: Not Supported 00:07:39.385 SGL Metadata Pointer: Not Supported 00:07:39.385 Oversized SGL: Not Supported 00:07:39.385 SGL Metadata Address: Not Supported 00:07:39.385 SGL Offset: Not Supported 00:07:39.385 Transport SGL Data Block: Not Supported 00:07:39.385 Replay Protected Memory Block: Not Supported 00:07:39.385 00:07:39.385 Firmware Slot Information 00:07:39.385 ========================= 00:07:39.385 Active slot: 1 00:07:39.385 Slot 1 Firmware Revision: 1.0 00:07:39.385 00:07:39.385 00:07:39.385 Commands Supported and Effects 00:07:39.385 ============================== 00:07:39.385 Admin Commands 00:07:39.385 -------------- 00:07:39.385 Delete I/O Submission Queue (00h): Supported 00:07:39.385 Create I/O Submission Queue (01h): Supported 00:07:39.385 Get Log Page (02h): Supported 00:07:39.385 Delete I/O Completion Queue (04h): Supported 00:07:39.385 Create I/O Completion Queue (05h): Supported 00:07:39.385 Identify (06h): Supported 00:07:39.385 Abort (08h): Supported 
00:07:39.385 Set Features (09h): Supported 00:07:39.385 Get Features (0Ah): Supported 00:07:39.385 Asynchronous Event Request (0Ch): Supported 00:07:39.385 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.385 Directive Send (19h): Supported 00:07:39.385 Directive Receive (1Ah): Supported 00:07:39.385 Virtualization Management (1Ch): Supported 00:07:39.385 Doorbell Buffer Config (7Ch): Supported 00:07:39.385 Format NVM (80h): Supported LBA-Change 00:07:39.385 I/O Commands 00:07:39.385 ------------ 00:07:39.385 Flush (00h): Supported LBA-Change 00:07:39.385 Write (01h): Supported LBA-Change 00:07:39.385 Read (02h): Supported 00:07:39.385 Compare (05h): Supported 00:07:39.385 Write Zeroes (08h): Supported LBA-Change 00:07:39.385 Dataset Management (09h): Supported LBA-Change 00:07:39.385 Unknown (0Ch): Supported 00:07:39.385 Unknown (12h): Supported 00:07:39.385 Copy (19h): Supported LBA-Change 00:07:39.385 Unknown (1Dh): Supported LBA-Change 00:07:39.385 00:07:39.385 Error Log 00:07:39.385 ========= 00:07:39.385 00:07:39.385 Arbitration 00:07:39.385 =========== 00:07:39.385 Arbitration Burst: no limit 00:07:39.385 00:07:39.385 Power Management 00:07:39.385 ================ 00:07:39.385 Number of Power States: 1 00:07:39.385 Current Power State: Power State #0 00:07:39.385 Power State #0: 00:07:39.385 Max Power: 25.00 W 00:07:39.385 Non-Operational State: Operational 00:07:39.385 Entry Latency: 16 microseconds 00:07:39.385 Exit Latency: 4 microseconds 00:07:39.385 Relative Read Throughput: 0 00:07:39.385 Relative Read Latency: 0 00:07:39.385 Relative Write Throughput: 0 00:07:39.385 Relative Write Latency: 0 00:07:39.385 Idle Power: Not Reported 00:07:39.385 Active Power: Not Reported 00:07:39.385 Non-Operational Permissive Mode: Not Supported 00:07:39.385 00:07:39.385 Health Information 00:07:39.385 ================== 00:07:39.385 Critical Warnings: 00:07:39.385 Available Spare Space: OK 00:07:39.385 Temperature: OK 00:07:39.385 Device Reliability: OK 00:07:39.385 Read Only: No 00:07:39.385 Volatile Memory Backup: OK 00:07:39.385 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.385 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.385 Available Spare: 0% 00:07:39.385 Available Spare Threshold: 0% 00:07:39.385 Life Percentage Used: 0% 00:07:39.385 Data Units Read: 609 00:07:39.385 Data Units Written: 537 00:07:39.385 Host Read Commands: 32133 00:07:39.385 Host Write Commands: 31919 00:07:39.385 Controller Busy Time: 0 minutes 00:07:39.385 Power Cycles: 0 00:07:39.385 Power On Hours: 0 hours 00:07:39.385 Unsafe Shutdowns: 0 00:07:39.385 Unrecoverable Media Errors: 0 00:07:39.385 Lifetime Error Log Entries: 0 00:07:39.385 Warning Temperature Time: 0 minutes 00:07:39.385 Critical Temperature Time: 0 minutes 00:07:39.385 00:07:39.385 Number of Queues 00:07:39.385 ================ 00:07:39.385 Number of I/O Submission Queues: 64 00:07:39.385 Number of I/O Completion Queues: 64 00:07:39.385 00:07:39.385 ZNS Specific Controller Data 00:07:39.385 ============================ 00:07:39.385 Zone Append Size Limit: 0 00:07:39.385 00:07:39.385 00:07:39.385 Active Namespaces 00:07:39.386 ================= 00:07:39.386 Namespace ID:1 00:07:39.386 Error Recovery Timeout: Unlimited 00:07:39.386 Command Set Identifier: NVM (00h) 00:07:39.386 Deallocate: Supported 00:07:39.386 Deallocated/Unwritten Error: Supported 00:07:39.386 Deallocated Read Value: All 0x00 00:07:39.386 Deallocate in Write Zeroes: Not Supported 00:07:39.386 Deallocated Guard Field: 0xFFFF 00:07:39.386 Flush: 
Supported 00:07:39.386 Reservation: Not Supported 00:07:39.386 Metadata Transferred as: Separate Metadata Buffer 00:07:39.386 Namespace Sharing Capabilities: Private 00:07:39.386 Size (in LBAs): 1548666 (5GiB) 00:07:39.386 Capacity (in LBAs): 1548666 (5GiB) 00:07:39.386 Utilization (in LBAs): 1548666 (5GiB) 00:07:39.386 Thin Provisioning: Not Supported 00:07:39.386 Per-NS Atomic Units: No 00:07:39.386 Maximum Single Source Range Length: 128 00:07:39.386 Maximum Copy Length: 128 00:07:39.386 Maximum Source Range Count: 128 00:07:39.386 NGUID/EUI64 Never Reused: No 00:07:39.386 Namespace Write Protected: No 00:07:39.386 Number of LBA Formats: 8 00:07:39.386 Current LBA Format: LBA Format #07 00:07:39.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.386 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.386 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.386 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.386 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.386 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.386 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.386 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.386 00:07:39.386 NVM Specific Namespace Data 00:07:39.386 =========================== 00:07:39.386 Logical Block Storage Tag Mask: 0 00:07:39.386 Protection Information Capabilities: 00:07:39.386 16b Guard Protection Information Storage Tag Support: No 00:07:39.386 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.386 Storage Tag Check Read Support: No 00:07:39.386 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.386 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:39.386 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:39.643 ===================================================== 00:07:39.643 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.643 ===================================================== 00:07:39.643 Controller Capabilities/Features 00:07:39.643 ================================ 00:07:39.643 Vendor ID: 1b36 00:07:39.643 Subsystem Vendor ID: 1af4 00:07:39.643 Serial Number: 12341 00:07:39.643 Model Number: QEMU NVMe Ctrl 00:07:39.643 Firmware Version: 8.0.0 00:07:39.643 Recommended Arb Burst: 6 00:07:39.643 IEEE OUI Identifier: 00 54 52 00:07:39.643 Multi-path I/O 00:07:39.643 May have multiple subsystem ports: No 00:07:39.643 May have multiple controllers: No 00:07:39.643 Associated with SR-IOV VF: No 00:07:39.643 Max Data Transfer Size: 524288 00:07:39.643 Max Number of Namespaces: 256 00:07:39.643 Max Number of I/O Queues: 64 00:07:39.643 NVMe 
Specification Version (VS): 1.4 00:07:39.643 NVMe Specification Version (Identify): 1.4 00:07:39.643 Maximum Queue Entries: 2048 00:07:39.643 Contiguous Queues Required: Yes 00:07:39.643 Arbitration Mechanisms Supported 00:07:39.643 Weighted Round Robin: Not Supported 00:07:39.643 Vendor Specific: Not Supported 00:07:39.643 Reset Timeout: 7500 ms 00:07:39.643 Doorbell Stride: 4 bytes 00:07:39.643 NVM Subsystem Reset: Not Supported 00:07:39.643 Command Sets Supported 00:07:39.643 NVM Command Set: Supported 00:07:39.643 Boot Partition: Not Supported 00:07:39.643 Memory Page Size Minimum: 4096 bytes 00:07:39.643 Memory Page Size Maximum: 65536 bytes 00:07:39.643 Persistent Memory Region: Not Supported 00:07:39.643 Optional Asynchronous Events Supported 00:07:39.643 Namespace Attribute Notices: Supported 00:07:39.643 Firmware Activation Notices: Not Supported 00:07:39.643 ANA Change Notices: Not Supported 00:07:39.643 PLE Aggregate Log Change Notices: Not Supported 00:07:39.643 LBA Status Info Alert Notices: Not Supported 00:07:39.643 EGE Aggregate Log Change Notices: Not Supported 00:07:39.643 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.643 Zone Descriptor Change Notices: Not Supported 00:07:39.643 Discovery Log Change Notices: Not Supported 00:07:39.643 Controller Attributes 00:07:39.643 128-bit Host Identifier: Not Supported 00:07:39.643 Non-Operational Permissive Mode: Not Supported 00:07:39.643 NVM Sets: Not Supported 00:07:39.643 Read Recovery Levels: Not Supported 00:07:39.643 Endurance Groups: Not Supported 00:07:39.643 Predictable Latency Mode: Not Supported 00:07:39.643 Traffic Based Keep ALive: Not Supported 00:07:39.643 Namespace Granularity: Not Supported 00:07:39.643 SQ Associations: Not Supported 00:07:39.643 UUID List: Not Supported 00:07:39.643 Multi-Domain Subsystem: Not Supported 00:07:39.643 Fixed Capacity Management: Not Supported 00:07:39.643 Variable Capacity Management: Not Supported 00:07:39.643 Delete Endurance Group: Not Supported 00:07:39.643 Delete NVM Set: Not Supported 00:07:39.643 Extended LBA Formats Supported: Supported 00:07:39.643 Flexible Data Placement Supported: Not Supported 00:07:39.643 00:07:39.643 Controller Memory Buffer Support 00:07:39.643 ================================ 00:07:39.643 Supported: No 00:07:39.643 00:07:39.643 Persistent Memory Region Support 00:07:39.643 ================================ 00:07:39.643 Supported: No 00:07:39.643 00:07:39.643 Admin Command Set Attributes 00:07:39.643 ============================ 00:07:39.643 Security Send/Receive: Not Supported 00:07:39.643 Format NVM: Supported 00:07:39.643 Firmware Activate/Download: Not Supported 00:07:39.643 Namespace Management: Supported 00:07:39.643 Device Self-Test: Not Supported 00:07:39.643 Directives: Supported 00:07:39.643 NVMe-MI: Not Supported 00:07:39.643 Virtualization Management: Not Supported 00:07:39.643 Doorbell Buffer Config: Supported 00:07:39.643 Get LBA Status Capability: Not Supported 00:07:39.643 Command & Feature Lockdown Capability: Not Supported 00:07:39.643 Abort Command Limit: 4 00:07:39.643 Async Event Request Limit: 4 00:07:39.643 Number of Firmware Slots: N/A 00:07:39.643 Firmware Slot 1 Read-Only: N/A 00:07:39.643 Firmware Activation Without Reset: N/A 00:07:39.643 Multiple Update Detection Support: N/A 00:07:39.643 Firmware Update Granularity: No Information Provided 00:07:39.643 Per-Namespace SMART Log: Yes 00:07:39.643 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.643 Subsystem NQN: nqn.2019-08.org.qemu:12341 
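When scanning dumps like this one, a few fields are enough to tell the controllers apart; here the serial number (12341) and the subsystem NQN (nqn.2019-08.org.qemu:12341) vary per device while most capability bits are identical. A hedged one-liner against a raw capture of the tool's output (identify.txt is a hypothetical file name, not produced by this job):

# Pull the distinguishing fields out of a saved identify dump.
grep -E '(Serial Number|Model Number|Subsystem NQN):' identify.txt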
00:07:39.643 Command Effects Log Page: Supported 00:07:39.643 Get Log Page Extended Data: Supported 00:07:39.643 Telemetry Log Pages: Not Supported 00:07:39.643 Persistent Event Log Pages: Not Supported 00:07:39.643 Supported Log Pages Log Page: May Support 00:07:39.644 Commands Supported & Effects Log Page: Not Supported 00:07:39.644 Feature Identifiers & Effects Log Page:May Support 00:07:39.644 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.644 Data Area 4 for Telemetry Log: Not Supported 00:07:39.644 Error Log Page Entries Supported: 1 00:07:39.644 Keep Alive: Not Supported 00:07:39.644 00:07:39.644 NVM Command Set Attributes 00:07:39.644 ========================== 00:07:39.644 Submission Queue Entry Size 00:07:39.644 Max: 64 00:07:39.644 Min: 64 00:07:39.644 Completion Queue Entry Size 00:07:39.644 Max: 16 00:07:39.644 Min: 16 00:07:39.644 Number of Namespaces: 256 00:07:39.644 Compare Command: Supported 00:07:39.644 Write Uncorrectable Command: Not Supported 00:07:39.644 Dataset Management Command: Supported 00:07:39.644 Write Zeroes Command: Supported 00:07:39.644 Set Features Save Field: Supported 00:07:39.644 Reservations: Not Supported 00:07:39.644 Timestamp: Supported 00:07:39.644 Copy: Supported 00:07:39.644 Volatile Write Cache: Present 00:07:39.644 Atomic Write Unit (Normal): 1 00:07:39.644 Atomic Write Unit (PFail): 1 00:07:39.644 Atomic Compare & Write Unit: 1 00:07:39.644 Fused Compare & Write: Not Supported 00:07:39.644 Scatter-Gather List 00:07:39.644 SGL Command Set: Supported 00:07:39.644 SGL Keyed: Not Supported 00:07:39.644 SGL Bit Bucket Descriptor: Not Supported 00:07:39.644 SGL Metadata Pointer: Not Supported 00:07:39.644 Oversized SGL: Not Supported 00:07:39.644 SGL Metadata Address: Not Supported 00:07:39.644 SGL Offset: Not Supported 00:07:39.644 Transport SGL Data Block: Not Supported 00:07:39.644 Replay Protected Memory Block: Not Supported 00:07:39.644 00:07:39.644 Firmware Slot Information 00:07:39.644 ========================= 00:07:39.644 Active slot: 1 00:07:39.644 Slot 1 Firmware Revision: 1.0 00:07:39.644 00:07:39.644 00:07:39.644 Commands Supported and Effects 00:07:39.644 ============================== 00:07:39.644 Admin Commands 00:07:39.644 -------------- 00:07:39.644 Delete I/O Submission Queue (00h): Supported 00:07:39.644 Create I/O Submission Queue (01h): Supported 00:07:39.644 Get Log Page (02h): Supported 00:07:39.644 Delete I/O Completion Queue (04h): Supported 00:07:39.644 Create I/O Completion Queue (05h): Supported 00:07:39.644 Identify (06h): Supported 00:07:39.644 Abort (08h): Supported 00:07:39.644 Set Features (09h): Supported 00:07:39.644 Get Features (0Ah): Supported 00:07:39.644 Asynchronous Event Request (0Ch): Supported 00:07:39.644 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.644 Directive Send (19h): Supported 00:07:39.644 Directive Receive (1Ah): Supported 00:07:39.644 Virtualization Management (1Ch): Supported 00:07:39.644 Doorbell Buffer Config (7Ch): Supported 00:07:39.644 Format NVM (80h): Supported LBA-Change 00:07:39.644 I/O Commands 00:07:39.644 ------------ 00:07:39.644 Flush (00h): Supported LBA-Change 00:07:39.644 Write (01h): Supported LBA-Change 00:07:39.644 Read (02h): Supported 00:07:39.644 Compare (05h): Supported 00:07:39.644 Write Zeroes (08h): Supported LBA-Change 00:07:39.644 Dataset Management (09h): Supported LBA-Change 00:07:39.644 Unknown (0Ch): Supported 00:07:39.644 Unknown (12h): Supported 00:07:39.644 Copy (19h): Supported LBA-Change 00:07:39.644 Unknown (1Dh): 
Supported LBA-Change 00:07:39.644 00:07:39.644 Error Log 00:07:39.644 ========= 00:07:39.644 00:07:39.644 Arbitration 00:07:39.644 =========== 00:07:39.644 Arbitration Burst: no limit 00:07:39.644 00:07:39.644 Power Management 00:07:39.644 ================ 00:07:39.644 Number of Power States: 1 00:07:39.644 Current Power State: Power State #0 00:07:39.644 Power State #0: 00:07:39.644 Max Power: 25.00 W 00:07:39.644 Non-Operational State: Operational 00:07:39.644 Entry Latency: 16 microseconds 00:07:39.644 Exit Latency: 4 microseconds 00:07:39.644 Relative Read Throughput: 0 00:07:39.644 Relative Read Latency: 0 00:07:39.644 Relative Write Throughput: 0 00:07:39.644 Relative Write Latency: 0 00:07:39.644 Idle Power: Not Reported 00:07:39.644 Active Power: Not Reported 00:07:39.644 Non-Operational Permissive Mode: Not Supported 00:07:39.644 00:07:39.644 Health Information 00:07:39.644 ================== 00:07:39.644 Critical Warnings: 00:07:39.644 Available Spare Space: OK 00:07:39.644 Temperature: OK 00:07:39.644 Device Reliability: OK 00:07:39.644 Read Only: No 00:07:39.644 Volatile Memory Backup: OK 00:07:39.644 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.644 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.644 Available Spare: 0% 00:07:39.644 Available Spare Threshold: 0% 00:07:39.644 Life Percentage Used: 0% 00:07:39.644 Data Units Read: 930 00:07:39.644 Data Units Written: 803 00:07:39.644 Host Read Commands: 48009 00:07:39.644 Host Write Commands: 46902 00:07:39.644 Controller Busy Time: 0 minutes 00:07:39.644 Power Cycles: 0 00:07:39.644 Power On Hours: 0 hours 00:07:39.644 Unsafe Shutdowns: 0 00:07:39.644 Unrecoverable Media Errors: 0 00:07:39.644 Lifetime Error Log Entries: 0 00:07:39.644 Warning Temperature Time: 0 minutes 00:07:39.644 Critical Temperature Time: 0 minutes 00:07:39.644 00:07:39.644 Number of Queues 00:07:39.644 ================ 00:07:39.644 Number of I/O Submission Queues: 64 00:07:39.644 Number of I/O Completion Queues: 64 00:07:39.644 00:07:39.644 ZNS Specific Controller Data 00:07:39.644 ============================ 00:07:39.644 Zone Append Size Limit: 0 00:07:39.644 00:07:39.644 00:07:39.644 Active Namespaces 00:07:39.644 ================= 00:07:39.644 Namespace ID:1 00:07:39.644 Error Recovery Timeout: Unlimited 00:07:39.644 Command Set Identifier: NVM (00h) 00:07:39.644 Deallocate: Supported 00:07:39.644 Deallocated/Unwritten Error: Supported 00:07:39.644 Deallocated Read Value: All 0x00 00:07:39.644 Deallocate in Write Zeroes: Not Supported 00:07:39.644 Deallocated Guard Field: 0xFFFF 00:07:39.644 Flush: Supported 00:07:39.644 Reservation: Not Supported 00:07:39.644 Namespace Sharing Capabilities: Private 00:07:39.644 Size (in LBAs): 1310720 (5GiB) 00:07:39.644 Capacity (in LBAs): 1310720 (5GiB) 00:07:39.644 Utilization (in LBAs): 1310720 (5GiB) 00:07:39.644 Thin Provisioning: Not Supported 00:07:39.644 Per-NS Atomic Units: No 00:07:39.644 Maximum Single Source Range Length: 128 00:07:39.644 Maximum Copy Length: 128 00:07:39.644 Maximum Source Range Count: 128 00:07:39.644 NGUID/EUI64 Never Reused: No 00:07:39.644 Namespace Write Protected: No 00:07:39.644 Number of LBA Formats: 8 00:07:39.644 Current LBA Format: LBA Format #04 00:07:39.644 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.644 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.644 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.644 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.644 LBA Format #04: Data Size: 4096 Metadata Size: 0 
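Two of the numbers in the dump above are simple derived values: the namespace size "1310720 (5GiB)" is the LBA count times the current format's 4096-byte data size (LBA Format #04), and "323 Kelvin (50 Celsius)" reflects the NVMe convention of reporting temperature in Kelvin, with 273 subtracted for the Celsius figure. Checked with shell arithmetic:

echo $(( 1310720 * 4096 ))                      # 5368709120 bytes
echo $(( 1310720 * 4096 / 1024 / 1024 / 1024 )) # 5 (GiB), as printed above
echo $(( 323 - 273 ))                           # 50, the Celsius value above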
00:07:39.644 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.644 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.644 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.644 00:07:39.644 NVM Specific Namespace Data 00:07:39.644 =========================== 00:07:39.644 Logical Block Storage Tag Mask: 0 00:07:39.644 Protection Information Capabilities: 00:07:39.644 16b Guard Protection Information Storage Tag Support: No 00:07:39.644 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.644 Storage Tag Check Read Support: No 00:07:39.644 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.644 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.644 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.644 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.644 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.645 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.645 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.645 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.645 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:39.645 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:39.902 ===================================================== 00:07:39.902 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.902 ===================================================== 00:07:39.902 Controller Capabilities/Features 00:07:39.902 ================================ 00:07:39.902 Vendor ID: 1b36 00:07:39.902 Subsystem Vendor ID: 1af4 00:07:39.902 Serial Number: 12342 00:07:39.902 Model Number: QEMU NVMe Ctrl 00:07:39.902 Firmware Version: 8.0.0 00:07:39.902 Recommended Arb Burst: 6 00:07:39.902 IEEE OUI Identifier: 00 54 52 00:07:39.902 Multi-path I/O 00:07:39.902 May have multiple subsystem ports: No 00:07:39.902 May have multiple controllers: No 00:07:39.902 Associated with SR-IOV VF: No 00:07:39.902 Max Data Transfer Size: 524288 00:07:39.902 Max Number of Namespaces: 256 00:07:39.902 Max Number of I/O Queues: 64 00:07:39.902 NVMe Specification Version (VS): 1.4 00:07:39.902 NVMe Specification Version (Identify): 1.4 00:07:39.902 Maximum Queue Entries: 2048 00:07:39.902 Contiguous Queues Required: Yes 00:07:39.902 Arbitration Mechanisms Supported 00:07:39.902 Weighted Round Robin: Not Supported 00:07:39.902 Vendor Specific: Not Supported 00:07:39.902 Reset Timeout: 7500 ms 00:07:39.902 Doorbell Stride: 4 bytes 00:07:39.902 NVM Subsystem Reset: Not Supported 00:07:39.902 Command Sets Supported 00:07:39.902 NVM Command Set: Supported 00:07:39.902 Boot Partition: Not Supported 00:07:39.902 Memory Page Size Minimum: 4096 bytes 00:07:39.902 Memory Page Size Maximum: 65536 bytes 00:07:39.902 Persistent Memory Region: Not Supported 00:07:39.902 Optional Asynchronous Events Supported 00:07:39.902 Namespace Attribute Notices: Supported 00:07:39.902 Firmware Activation Notices: Not Supported 00:07:39.902 ANA Change Notices: Not Supported 00:07:39.902 PLE Aggregate Log Change Notices: Not Supported 00:07:39.902 LBA Status Info Alert Notices: 
Not Supported 00:07:39.902 EGE Aggregate Log Change Notices: Not Supported 00:07:39.902 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.902 Zone Descriptor Change Notices: Not Supported 00:07:39.902 Discovery Log Change Notices: Not Supported 00:07:39.902 Controller Attributes 00:07:39.902 128-bit Host Identifier: Not Supported 00:07:39.902 Non-Operational Permissive Mode: Not Supported 00:07:39.902 NVM Sets: Not Supported 00:07:39.902 Read Recovery Levels: Not Supported 00:07:39.902 Endurance Groups: Not Supported 00:07:39.902 Predictable Latency Mode: Not Supported 00:07:39.902 Traffic Based Keep ALive: Not Supported 00:07:39.902 Namespace Granularity: Not Supported 00:07:39.902 SQ Associations: Not Supported 00:07:39.902 UUID List: Not Supported 00:07:39.902 Multi-Domain Subsystem: Not Supported 00:07:39.902 Fixed Capacity Management: Not Supported 00:07:39.902 Variable Capacity Management: Not Supported 00:07:39.902 Delete Endurance Group: Not Supported 00:07:39.902 Delete NVM Set: Not Supported 00:07:39.902 Extended LBA Formats Supported: Supported 00:07:39.902 Flexible Data Placement Supported: Not Supported 00:07:39.902 00:07:39.902 Controller Memory Buffer Support 00:07:39.902 ================================ 00:07:39.902 Supported: No 00:07:39.902 00:07:39.902 Persistent Memory Region Support 00:07:39.902 ================================ 00:07:39.902 Supported: No 00:07:39.902 00:07:39.902 Admin Command Set Attributes 00:07:39.902 ============================ 00:07:39.902 Security Send/Receive: Not Supported 00:07:39.902 Format NVM: Supported 00:07:39.902 Firmware Activate/Download: Not Supported 00:07:39.902 Namespace Management: Supported 00:07:39.902 Device Self-Test: Not Supported 00:07:39.903 Directives: Supported 00:07:39.903 NVMe-MI: Not Supported 00:07:39.903 Virtualization Management: Not Supported 00:07:39.903 Doorbell Buffer Config: Supported 00:07:39.903 Get LBA Status Capability: Not Supported 00:07:39.903 Command & Feature Lockdown Capability: Not Supported 00:07:39.903 Abort Command Limit: 4 00:07:39.903 Async Event Request Limit: 4 00:07:39.903 Number of Firmware Slots: N/A 00:07:39.903 Firmware Slot 1 Read-Only: N/A 00:07:39.903 Firmware Activation Without Reset: N/A 00:07:39.903 Multiple Update Detection Support: N/A 00:07:39.903 Firmware Update Granularity: No Information Provided 00:07:39.903 Per-Namespace SMART Log: Yes 00:07:39.903 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.903 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:39.903 Command Effects Log Page: Supported 00:07:39.903 Get Log Page Extended Data: Supported 00:07:39.903 Telemetry Log Pages: Not Supported 00:07:39.903 Persistent Event Log Pages: Not Supported 00:07:39.903 Supported Log Pages Log Page: May Support 00:07:39.903 Commands Supported & Effects Log Page: Not Supported 00:07:39.903 Feature Identifiers & Effects Log Page:May Support 00:07:39.903 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.903 Data Area 4 for Telemetry Log: Not Supported 00:07:39.903 Error Log Page Entries Supported: 1 00:07:39.903 Keep Alive: Not Supported 00:07:39.903 00:07:39.903 NVM Command Set Attributes 00:07:39.903 ========================== 00:07:39.903 Submission Queue Entry Size 00:07:39.903 Max: 64 00:07:39.903 Min: 64 00:07:39.903 Completion Queue Entry Size 00:07:39.903 Max: 16 00:07:39.903 Min: 16 00:07:39.903 Number of Namespaces: 256 00:07:39.903 Compare Command: Supported 00:07:39.903 Write Uncorrectable Command: Not Supported 00:07:39.903 Dataset Management Command: 
Supported 00:07:39.903 Write Zeroes Command: Supported 00:07:39.903 Set Features Save Field: Supported 00:07:39.903 Reservations: Not Supported 00:07:39.903 Timestamp: Supported 00:07:39.903 Copy: Supported 00:07:39.903 Volatile Write Cache: Present 00:07:39.903 Atomic Write Unit (Normal): 1 00:07:39.903 Atomic Write Unit (PFail): 1 00:07:39.903 Atomic Compare & Write Unit: 1 00:07:39.903 Fused Compare & Write: Not Supported 00:07:39.903 Scatter-Gather List 00:07:39.903 SGL Command Set: Supported 00:07:39.903 SGL Keyed: Not Supported 00:07:39.903 SGL Bit Bucket Descriptor: Not Supported 00:07:39.903 SGL Metadata Pointer: Not Supported 00:07:39.903 Oversized SGL: Not Supported 00:07:39.903 SGL Metadata Address: Not Supported 00:07:39.903 SGL Offset: Not Supported 00:07:39.903 Transport SGL Data Block: Not Supported 00:07:39.903 Replay Protected Memory Block: Not Supported 00:07:39.903 00:07:39.903 Firmware Slot Information 00:07:39.903 ========================= 00:07:39.903 Active slot: 1 00:07:39.903 Slot 1 Firmware Revision: 1.0 00:07:39.903 00:07:39.903 00:07:39.903 Commands Supported and Effects 00:07:39.903 ============================== 00:07:39.903 Admin Commands 00:07:39.903 -------------- 00:07:39.903 Delete I/O Submission Queue (00h): Supported 00:07:39.903 Create I/O Submission Queue (01h): Supported 00:07:39.903 Get Log Page (02h): Supported 00:07:39.903 Delete I/O Completion Queue (04h): Supported 00:07:39.903 Create I/O Completion Queue (05h): Supported 00:07:39.903 Identify (06h): Supported 00:07:39.903 Abort (08h): Supported 00:07:39.903 Set Features (09h): Supported 00:07:39.903 Get Features (0Ah): Supported 00:07:39.903 Asynchronous Event Request (0Ch): Supported 00:07:39.903 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.903 Directive Send (19h): Supported 00:07:39.903 Directive Receive (1Ah): Supported 00:07:39.903 Virtualization Management (1Ch): Supported 00:07:39.903 Doorbell Buffer Config (7Ch): Supported 00:07:39.903 Format NVM (80h): Supported LBA-Change 00:07:39.903 I/O Commands 00:07:39.903 ------------ 00:07:39.903 Flush (00h): Supported LBA-Change 00:07:39.903 Write (01h): Supported LBA-Change 00:07:39.903 Read (02h): Supported 00:07:39.903 Compare (05h): Supported 00:07:39.903 Write Zeroes (08h): Supported LBA-Change 00:07:39.903 Dataset Management (09h): Supported LBA-Change 00:07:39.903 Unknown (0Ch): Supported 00:07:39.903 Unknown (12h): Supported 00:07:39.903 Copy (19h): Supported LBA-Change 00:07:39.903 Unknown (1Dh): Supported LBA-Change 00:07:39.903 00:07:39.903 Error Log 00:07:39.903 ========= 00:07:39.903 00:07:39.903 Arbitration 00:07:39.903 =========== 00:07:39.903 Arbitration Burst: no limit 00:07:39.903 00:07:39.903 Power Management 00:07:39.903 ================ 00:07:39.903 Number of Power States: 1 00:07:39.903 Current Power State: Power State #0 00:07:39.903 Power State #0: 00:07:39.903 Max Power: 25.00 W 00:07:39.903 Non-Operational State: Operational 00:07:39.903 Entry Latency: 16 microseconds 00:07:39.903 Exit Latency: 4 microseconds 00:07:39.903 Relative Read Throughput: 0 00:07:39.903 Relative Read Latency: 0 00:07:39.903 Relative Write Throughput: 0 00:07:39.903 Relative Write Latency: 0 00:07:39.903 Idle Power: Not Reported 00:07:39.903 Active Power: Not Reported 00:07:39.903 Non-Operational Permissive Mode: Not Supported 00:07:39.903 00:07:39.903 Health Information 00:07:39.903 ================== 00:07:39.903 Critical Warnings: 00:07:39.903 Available Spare Space: OK 00:07:39.903 Temperature: OK 00:07:39.903 Device 
Reliability: OK 00:07:39.903 Read Only: No 00:07:39.903 Volatile Memory Backup: OK 00:07:39.903 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.903 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.903 Available Spare: 0% 00:07:39.903 Available Spare Threshold: 0% 00:07:39.903 Life Percentage Used: 0% 00:07:39.903 Data Units Read: 1956 00:07:39.903 Data Units Written: 1743 00:07:39.903 Host Read Commands: 98202 00:07:39.903 Host Write Commands: 96471 00:07:39.903 Controller Busy Time: 0 minutes 00:07:39.903 Power Cycles: 0 00:07:39.903 Power On Hours: 0 hours 00:07:39.903 Unsafe Shutdowns: 0 00:07:39.903 Unrecoverable Media Errors: 0 00:07:39.903 Lifetime Error Log Entries: 0 00:07:39.903 Warning Temperature Time: 0 minutes 00:07:39.903 Critical Temperature Time: 0 minutes 00:07:39.903 00:07:39.903 Number of Queues 00:07:39.903 ================ 00:07:39.903 Number of I/O Submission Queues: 64 00:07:39.903 Number of I/O Completion Queues: 64 00:07:39.903 00:07:39.903 ZNS Specific Controller Data 00:07:39.903 ============================ 00:07:39.903 Zone Append Size Limit: 0 00:07:39.903 00:07:39.903 00:07:39.903 Active Namespaces 00:07:39.903 ================= 00:07:39.903 Namespace ID:1 00:07:39.903 Error Recovery Timeout: Unlimited 00:07:39.903 Command Set Identifier: NVM (00h) 00:07:39.903 Deallocate: Supported 00:07:39.903 Deallocated/Unwritten Error: Supported 00:07:39.903 Deallocated Read Value: All 0x00 00:07:39.903 Deallocate in Write Zeroes: Not Supported 00:07:39.903 Deallocated Guard Field: 0xFFFF 00:07:39.903 Flush: Supported 00:07:39.903 Reservation: Not Supported 00:07:39.903 Namespace Sharing Capabilities: Private 00:07:39.903 Size (in LBAs): 1048576 (4GiB) 00:07:39.903 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.903 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.903 Thin Provisioning: Not Supported 00:07:39.903 Per-NS Atomic Units: No 00:07:39.903 Maximum Single Source Range Length: 128 00:07:39.903 Maximum Copy Length: 128 00:07:39.903 Maximum Source Range Count: 128 00:07:39.903 NGUID/EUI64 Never Reused: No 00:07:39.903 Namespace Write Protected: No 00:07:39.903 Number of LBA Formats: 8 00:07:39.903 Current LBA Format: LBA Format #04 00:07:39.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.903 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.903 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.903 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.903 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.903 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.903 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.903 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.903 00:07:39.903 NVM Specific Namespace Data 00:07:39.903 =========================== 00:07:39.903 Logical Block Storage Tag Mask: 0 00:07:39.903 Protection Information Capabilities: 00:07:39.903 16b Guard Protection Information Storage Tag Support: No 00:07:39.903 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.903 Storage Tag Check Read Support: No 00:07:39.903 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.903 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.903 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.903 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Namespace ID:2 00:07:39.904 Error Recovery Timeout: Unlimited 00:07:39.904 Command Set Identifier: NVM (00h) 00:07:39.904 Deallocate: Supported 00:07:39.904 Deallocated/Unwritten Error: Supported 00:07:39.904 Deallocated Read Value: All 0x00 00:07:39.904 Deallocate in Write Zeroes: Not Supported 00:07:39.904 Deallocated Guard Field: 0xFFFF 00:07:39.904 Flush: Supported 00:07:39.904 Reservation: Not Supported 00:07:39.904 Namespace Sharing Capabilities: Private 00:07:39.904 Size (in LBAs): 1048576 (4GiB) 00:07:39.904 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.904 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.904 Thin Provisioning: Not Supported 00:07:39.904 Per-NS Atomic Units: No 00:07:39.904 Maximum Single Source Range Length: 128 00:07:39.904 Maximum Copy Length: 128 00:07:39.904 Maximum Source Range Count: 128 00:07:39.904 NGUID/EUI64 Never Reused: No 00:07:39.904 Namespace Write Protected: No 00:07:39.904 Number of LBA Formats: 8 00:07:39.904 Current LBA Format: LBA Format #04 00:07:39.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.904 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.904 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.904 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.904 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.904 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.904 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.904 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.904 00:07:39.904 NVM Specific Namespace Data 00:07:39.904 =========================== 00:07:39.904 Logical Block Storage Tag Mask: 0 00:07:39.904 Protection Information Capabilities: 00:07:39.904 16b Guard Protection Information Storage Tag Support: No 00:07:39.904 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.904 Storage Tag Check Read Support: No 00:07:39.904 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Namespace ID:3 00:07:39.904 Error Recovery Timeout: Unlimited 00:07:39.904 Command Set Identifier: NVM (00h) 00:07:39.904 Deallocate: Supported 00:07:39.904 Deallocated/Unwritten Error: Supported 00:07:39.904 Deallocated Read Value: All 0x00 00:07:39.904 Deallocate in Write Zeroes: Not Supported 00:07:39.904 Deallocated Guard Field: 0xFFFF 00:07:39.904 Flush: Supported 00:07:39.904 Reservation: Not Supported 00:07:39.904 
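The 12342 controller advertises up to 256 namespaces, but the dump enumerates only the active ones (Namespace ID:1, 2 and 3 appear in this stretch of the log). Counting them from a raw capture is a one-liner (identify.txt again a hypothetical saved copy of the tool's output):

# Count active namespaces reported by an identify dump.
grep -c 'Namespace ID:' identify.txt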
Namespace Sharing Capabilities: Private 00:07:39.904 Size (in LBAs): 1048576 (4GiB) 00:07:39.904 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.904 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.904 Thin Provisioning: Not Supported 00:07:39.904 Per-NS Atomic Units: No 00:07:39.904 Maximum Single Source Range Length: 128 00:07:39.904 Maximum Copy Length: 128 00:07:39.904 Maximum Source Range Count: 128 00:07:39.904 NGUID/EUI64 Never Reused: No 00:07:39.904 Namespace Write Protected: No 00:07:39.904 Number of LBA Formats: 8 00:07:39.904 Current LBA Format: LBA Format #04 00:07:39.904 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.904 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.904 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.904 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.904 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.904 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.904 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.904 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.904 00:07:39.904 NVM Specific Namespace Data 00:07:39.904 =========================== 00:07:39.904 Logical Block Storage Tag Mask: 0 00:07:39.904 Protection Information Capabilities: 00:07:39.904 16b Guard Protection Information Storage Tag Support: No 00:07:39.904 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.904 Storage Tag Check Read Support: No 00:07:39.904 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.904 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:39.904 10:06:45 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:40.162 ===================================================== 00:07:40.162 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.162 ===================================================== 00:07:40.162 Controller Capabilities/Features 00:07:40.162 ================================ 00:07:40.162 Vendor ID: 1b36 00:07:40.162 Subsystem Vendor ID: 1af4 00:07:40.162 Serial Number: 12343 00:07:40.162 Model Number: QEMU NVMe Ctrl 00:07:40.162 Firmware Version: 8.0.0 00:07:40.162 Recommended Arb Burst: 6 00:07:40.162 IEEE OUI Identifier: 00 54 52 00:07:40.162 Multi-path I/O 00:07:40.162 May have multiple subsystem ports: No 00:07:40.162 May have multiple controllers: Yes 00:07:40.162 Associated with SR-IOV VF: No 00:07:40.162 Max Data Transfer Size: 524288 00:07:40.162 Max Number of Namespaces: 256 00:07:40.162 Max Number of I/O Queues: 64 00:07:40.162 NVMe Specification Version (VS): 1.4 00:07:40.162 NVMe Specification Version (Identify): 1.4 00:07:40.162 Maximum Queue Entries: 2048 
00:07:40.162 Contiguous Queues Required: Yes 00:07:40.162 Arbitration Mechanisms Supported 00:07:40.162 Weighted Round Robin: Not Supported 00:07:40.162 Vendor Specific: Not Supported 00:07:40.163 Reset Timeout: 7500 ms 00:07:40.163 Doorbell Stride: 4 bytes 00:07:40.163 NVM Subsystem Reset: Not Supported 00:07:40.163 Command Sets Supported 00:07:40.163 NVM Command Set: Supported 00:07:40.163 Boot Partition: Not Supported 00:07:40.163 Memory Page Size Minimum: 4096 bytes 00:07:40.163 Memory Page Size Maximum: 65536 bytes 00:07:40.163 Persistent Memory Region: Not Supported 00:07:40.163 Optional Asynchronous Events Supported 00:07:40.163 Namespace Attribute Notices: Supported 00:07:40.163 Firmware Activation Notices: Not Supported 00:07:40.163 ANA Change Notices: Not Supported 00:07:40.163 PLE Aggregate Log Change Notices: Not Supported 00:07:40.163 LBA Status Info Alert Notices: Not Supported 00:07:40.163 EGE Aggregate Log Change Notices: Not Supported 00:07:40.163 Normal NVM Subsystem Shutdown event: Not Supported 00:07:40.163 Zone Descriptor Change Notices: Not Supported 00:07:40.163 Discovery Log Change Notices: Not Supported 00:07:40.163 Controller Attributes 00:07:40.163 128-bit Host Identifier: Not Supported 00:07:40.163 Non-Operational Permissive Mode: Not Supported 00:07:40.163 NVM Sets: Not Supported 00:07:40.163 Read Recovery Levels: Not Supported 00:07:40.163 Endurance Groups: Supported 00:07:40.163 Predictable Latency Mode: Not Supported 00:07:40.163 Traffic Based Keep Alive: Not Supported 00:07:40.163 Namespace Granularity: Not Supported 00:07:40.163 SQ Associations: Not Supported 00:07:40.163 UUID List: Not Supported 00:07:40.163 Multi-Domain Subsystem: Not Supported 00:07:40.163 Fixed Capacity Management: Not Supported 00:07:40.163 Variable Capacity Management: Not Supported 00:07:40.163 Delete Endurance Group: Not Supported 00:07:40.163 Delete NVM Set: Not Supported 00:07:40.163 Extended LBA Formats Supported: Supported 00:07:40.163 Flexible Data Placement Supported: Supported 00:07:40.163 00:07:40.163 Controller Memory Buffer Support 00:07:40.163 ================================ 00:07:40.163 Supported: No 00:07:40.163 00:07:40.163 Persistent Memory Region Support 00:07:40.163 ================================ 00:07:40.163 Supported: No 00:07:40.163 00:07:40.163 Admin Command Set Attributes 00:07:40.163 ============================ 00:07:40.163 Security Send/Receive: Not Supported 00:07:40.163 Format NVM: Supported 00:07:40.163 Firmware Activate/Download: Not Supported 00:07:40.163 Namespace Management: Supported 00:07:40.163 Device Self-Test: Not Supported 00:07:40.163 Directives: Supported 00:07:40.163 NVMe-MI: Not Supported 00:07:40.163 Virtualization Management: Not Supported 00:07:40.163 Doorbell Buffer Config: Supported 00:07:40.163 Get LBA Status Capability: Not Supported 00:07:40.163 Command & Feature Lockdown Capability: Not Supported 00:07:40.163 Abort Command Limit: 4 00:07:40.163 Async Event Request Limit: 4 00:07:40.163 Number of Firmware Slots: N/A 00:07:40.163 Firmware Slot 1 Read-Only: N/A 00:07:40.163 Firmware Activation Without Reset: N/A 00:07:40.163 Multiple Update Detection Support: N/A 00:07:40.163 Firmware Update Granularity: No Information Provided 00:07:40.163 Per-Namespace SMART Log: Yes 00:07:40.163 Asymmetric Namespace Access Log Page: Not Supported 00:07:40.163 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:40.163 Command Effects Log Page: Supported 00:07:40.163 Get Log Page Extended Data: Supported 00:07:40.163 Telemetry Log Pages: Not
Supported 00:07:40.163 Persistent Event Log Pages: Not Supported 00:07:40.163 Supported Log Pages Log Page: May Support 00:07:40.163 Commands Supported & Effects Log Page: Not Supported 00:07:40.163 Feature Identifiers & Effects Log Page: May Support 00:07:40.163 NVMe-MI Commands & Effects Log Page: May Support 00:07:40.163 Data Area 4 for Telemetry Log: Not Supported 00:07:40.163 Error Log Page Entries Supported: 1 00:07:40.163 Keep Alive: Not Supported 00:07:40.163 00:07:40.163 NVM Command Set Attributes 00:07:40.163 ========================== 00:07:40.163 Submission Queue Entry Size 00:07:40.163 Max: 64 00:07:40.163 Min: 64 00:07:40.163 Completion Queue Entry Size 00:07:40.163 Max: 16 00:07:40.163 Min: 16 00:07:40.163 Number of Namespaces: 256 00:07:40.163 Compare Command: Supported 00:07:40.163 Write Uncorrectable Command: Not Supported 00:07:40.163 Dataset Management Command: Supported 00:07:40.163 Write Zeroes Command: Supported 00:07:40.163 Set Features Save Field: Supported 00:07:40.163 Reservations: Not Supported 00:07:40.163 Timestamp: Supported 00:07:40.163 Copy: Supported 00:07:40.163 Volatile Write Cache: Present 00:07:40.163 Atomic Write Unit (Normal): 1 00:07:40.163 Atomic Write Unit (PFail): 1 00:07:40.163 Atomic Compare & Write Unit: 1 00:07:40.163 Fused Compare & Write: Not Supported 00:07:40.163 Scatter-Gather List 00:07:40.163 SGL Command Set: Supported 00:07:40.163 SGL Keyed: Not Supported 00:07:40.163 SGL Bit Bucket Descriptor: Not Supported 00:07:40.163 SGL Metadata Pointer: Not Supported 00:07:40.163 Oversized SGL: Not Supported 00:07:40.163 SGL Metadata Address: Not Supported 00:07:40.163 SGL Offset: Not Supported 00:07:40.163 Transport SGL Data Block: Not Supported 00:07:40.163 Replay Protected Memory Block: Not Supported 00:07:40.163 00:07:40.163 Firmware Slot Information 00:07:40.163 ========================= 00:07:40.163 Active slot: 1 00:07:40.163 Slot 1 Firmware Revision: 1.0 00:07:40.163 00:07:40.163 00:07:40.163 Commands Supported and Effects 00:07:40.163 ============================== 00:07:40.163 Admin Commands 00:07:40.163 -------------- 00:07:40.163 Delete I/O Submission Queue (00h): Supported 00:07:40.163 Create I/O Submission Queue (01h): Supported 00:07:40.163 Get Log Page (02h): Supported 00:07:40.163 Delete I/O Completion Queue (04h): Supported 00:07:40.163 Create I/O Completion Queue (05h): Supported 00:07:40.163 Identify (06h): Supported 00:07:40.163 Abort (08h): Supported 00:07:40.163 Set Features (09h): Supported 00:07:40.163 Get Features (0Ah): Supported 00:07:40.163 Asynchronous Event Request (0Ch): Supported 00:07:40.163 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:40.163 Directive Send (19h): Supported 00:07:40.163 Directive Receive (1Ah): Supported 00:07:40.163 Virtualization Management (1Ch): Supported 00:07:40.163 Doorbell Buffer Config (7Ch): Supported 00:07:40.163 Format NVM (80h): Supported LBA-Change 00:07:40.163 I/O Commands 00:07:40.163 ------------ 00:07:40.163 Flush (00h): Supported LBA-Change 00:07:40.163 Write (01h): Supported LBA-Change 00:07:40.163 Read (02h): Supported 00:07:40.163 Compare (05h): Supported 00:07:40.163 Write Zeroes (08h): Supported LBA-Change 00:07:40.163 Dataset Management (09h): Supported LBA-Change 00:07:40.163 Unknown (0Ch): Supported 00:07:40.163 Unknown (12h): Supported 00:07:40.163 Copy (19h): Supported LBA-Change 00:07:40.163 Unknown (1Dh): Supported LBA-Change 00:07:40.163 00:07:40.163 Error Log 00:07:40.163 ========= 00:07:40.163 00:07:40.163 Arbitration 00:07:40.163 ===========
00:07:40.163 Arbitration Burst: no limit 00:07:40.163 00:07:40.163 Power Management 00:07:40.163 ================ 00:07:40.163 Number of Power States: 1 00:07:40.163 Current Power State: Power State #0 00:07:40.163 Power State #0: 00:07:40.163 Max Power: 25.00 W 00:07:40.163 Non-Operational State: Operational 00:07:40.163 Entry Latency: 16 microseconds 00:07:40.163 Exit Latency: 4 microseconds 00:07:40.163 Relative Read Throughput: 0 00:07:40.163 Relative Read Latency: 0 00:07:40.163 Relative Write Throughput: 0 00:07:40.163 Relative Write Latency: 0 00:07:40.163 Idle Power: Not Reported 00:07:40.163 Active Power: Not Reported 00:07:40.163 Non-Operational Permissive Mode: Not Supported 00:07:40.163 00:07:40.163 Health Information 00:07:40.163 ================== 00:07:40.163 Critical Warnings: 00:07:40.163 Available Spare Space: OK 00:07:40.163 Temperature: OK 00:07:40.163 Device Reliability: OK 00:07:40.163 Read Only: No 00:07:40.163 Volatile Memory Backup: OK 00:07:40.163 Current Temperature: 323 Kelvin (50 Celsius) 00:07:40.163 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:40.163 Available Spare: 0% 00:07:40.163 Available Spare Threshold: 0% 00:07:40.163 Life Percentage Used: 0% 00:07:40.163 Data Units Read: 724 00:07:40.163 Data Units Written: 653 00:07:40.163 Host Read Commands: 33453 00:07:40.163 Host Write Commands: 32876 00:07:40.163 Controller Busy Time: 0 minutes 00:07:40.163 Power Cycles: 0 00:07:40.163 Power On Hours: 0 hours 00:07:40.163 Unsafe Shutdowns: 0 00:07:40.163 Unrecoverable Media Errors: 0 00:07:40.163 Lifetime Error Log Entries: 0 00:07:40.163 Warning Temperature Time: 0 minutes 00:07:40.164 Critical Temperature Time: 0 minutes 00:07:40.164 00:07:40.164 Number of Queues 00:07:40.164 ================ 00:07:40.164 Number of I/O Submission Queues: 64 00:07:40.164 Number of I/O Completion Queues: 64 00:07:40.164 00:07:40.164 ZNS Specific Controller Data 00:07:40.164 ============================ 00:07:40.164 Zone Append Size Limit: 0 00:07:40.164 00:07:40.164 00:07:40.164 Active Namespaces 00:07:40.164 ================= 00:07:40.164 Namespace ID:1 00:07:40.164 Error Recovery Timeout: Unlimited 00:07:40.164 Command Set Identifier: NVM (00h) 00:07:40.164 Deallocate: Supported 00:07:40.164 Deallocated/Unwritten Error: Supported 00:07:40.164 Deallocated Read Value: All 0x00 00:07:40.164 Deallocate in Write Zeroes: Not Supported 00:07:40.164 Deallocated Guard Field: 0xFFFF 00:07:40.164 Flush: Supported 00:07:40.164 Reservation: Not Supported 00:07:40.164 Namespace Sharing Capabilities: Multiple Controllers 00:07:40.164 Size (in LBAs): 262144 (1GiB) 00:07:40.164 Capacity (in LBAs): 262144 (1GiB) 00:07:40.164 Utilization (in LBAs): 262144 (1GiB) 00:07:40.164 Thin Provisioning: Not Supported 00:07:40.164 Per-NS Atomic Units: No 00:07:40.164 Maximum Single Source Range Length: 128 00:07:40.164 Maximum Copy Length: 128 00:07:40.164 Maximum Source Range Count: 128 00:07:40.164 NGUID/EUI64 Never Reused: No 00:07:40.164 Namespace Write Protected: No 00:07:40.164 Endurance group ID: 1 00:07:40.164 Number of LBA Formats: 8 00:07:40.164 Current LBA Format: LBA Format #04 00:07:40.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:40.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:40.164 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:40.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:40.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:40.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:40.164 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:40.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:40.164 00:07:40.164 Get Feature FDP: 00:07:40.164 ================ 00:07:40.164 Enabled: Yes 00:07:40.164 FDP configuration index: 0 00:07:40.164 00:07:40.164 FDP configurations log page 00:07:40.164 =========================== 00:07:40.164 Number of FDP configurations: 1 00:07:40.164 Version: 0 00:07:40.164 Size: 112 00:07:40.164 FDP Configuration Descriptor: 0 00:07:40.164 Descriptor Size: 96 00:07:40.164 Reclaim Group Identifier format: 2 00:07:40.164 FDP Volatile Write Cache: Not Present 00:07:40.164 FDP Configuration: Valid 00:07:40.164 Vendor Specific Size: 0 00:07:40.164 Number of Reclaim Groups: 2 00:07:40.164 Number of Reclaim Unit Handles: 8 00:07:40.164 Max Placement Identifiers: 128 00:07:40.164 Number of Namespaces Supported: 256 00:07:40.164 Reclaim Unit Nominal Size: 6000000 bytes 00:07:40.164 Estimated Reclaim Unit Time Limit: Not Reported 00:07:40.164 RUH Desc #000: RUH Type: Initially Isolated 00:07:40.164 RUH Desc #001: RUH Type: Initially Isolated 00:07:40.164 RUH Desc #002: RUH Type: Initially Isolated 00:07:40.164 RUH Desc #003: RUH Type: Initially Isolated 00:07:40.164 RUH Desc #004: RUH Type: Initially Isolated 00:07:40.164 RUH Desc #005: RUH Type: Initially Isolated 00:07:40.164 RUH Desc #006: RUH Type: Initially Isolated 00:07:40.164 RUH Desc #007: RUH Type: Initially Isolated 00:07:40.164 00:07:40.164 FDP reclaim unit handle usage log page 00:07:40.164 ====================================== 00:07:40.164 Number of Reclaim Unit Handles: 8 00:07:40.164 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:40.164 RUH Usage Desc #001: RUH Attributes: Unused 00:07:40.164 RUH Usage Desc #002: RUH Attributes: Unused 00:07:40.164 RUH Usage Desc #003: RUH Attributes: Unused 00:07:40.164 RUH Usage Desc #004: RUH Attributes: Unused 00:07:40.164 RUH Usage Desc #005: RUH Attributes: Unused 00:07:40.164 RUH Usage Desc #006: RUH Attributes: Unused 00:07:40.164 RUH Usage Desc #007: RUH Attributes: Unused 00:07:40.164 00:07:40.164 FDP statistics log page 00:07:40.164 ======================= 00:07:40.164 Host bytes with metadata written: 382836736 00:07:40.164 Media bytes with metadata written: 382877696 00:07:40.164 Media bytes erased: 0 00:07:40.164 00:07:40.164 FDP events log page 00:07:40.164 =================== 00:07:40.164 Number of FDP events: 0 00:07:40.164 00:07:40.164 NVM Specific Namespace Data 00:07:40.164 =========================== 00:07:40.164 Logical Block Storage Tag Mask: 0 00:07:40.164 Protection Information Capabilities: 00:07:40.164 16b Guard Protection Information Storage Tag Support: No 00:07:40.164 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:40.164 Storage Tag Check Read Support: No 00:07:40.164 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:40.164 00:07:40.164 real 0m1.197s 00:07:40.164 user 0m0.437s 00:07:40.164 sys 0m0.550s 00:07:40.164 10:06:46 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.164 10:06:46 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:40.164 ************************************ 00:07:40.164 END TEST nvme_identify 00:07:40.164 ************************************ 00:07:40.164 10:06:46 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:40.164 10:06:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.164 10:06:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.164 10:06:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:40.164 ************************************ 00:07:40.164 START TEST nvme_perf 00:07:40.164 ************************************ 00:07:40.164 10:06:46 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:40.164 10:06:46 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:41.541 Initializing NVMe Controllers 00:07:41.541 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:41.541 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:41.541 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:41.541 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:41.541 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:41.541 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:41.541 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:41.541 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:41.541 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:41.541 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:41.541 Initialization complete. Launching workers. 
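The identify dump above and the perf pass that follows come from the two nvme.sh invocations traced in the log, and both can be replayed by hand against the same QEMU-emulated controllers. A minimal sketch, assuming the SPDK build sits at the path the harness uses and that the four BDFs probed in this run are already bound to a userspace driver (e.g. via scripts/setup.sh); the -i 0 and -N arguments are simply carried over from the harness unchanged:
#!/usr/bin/env bash
# Sketch: replay the nvme_identify / nvme_perf steps from this log by hand.
SPDK=/home/vagrant/spdk_repo/spdk
# Per-controller identify, as in nvme.sh@15-16: one call per PCIe BDF.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    "$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
done
# Read-only perf pass, as in nvme.sh@22:
#   -q 128   queue depth of 128
#   -w read  sequential 100% reads
#   -o 12288 12 KiB I/O size
#   -t 1     run for 1 second
#   -LL      software latency tracking; the doubled L requests the detailed
#            percentile/histogram output reproduced below
"$SPDK/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
As a sanity check on the summary table that follows: at the 12288-byte I/O size, 9111.34 IOPS works out to 9111.34 * 12288 / 2^20 ≈ 106.77 MiB/s, matching the MiB/s column, and the six per-namespace rows sum (up to rounding) to the Total row of 54731.73 IOPS / 641.39 MiB/s.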
00:07:41.541 ======================================================== 00:07:41.541 Latency(us) 00:07:41.541 Device Information : IOPS MiB/s Average min max 00:07:41.541 PCIE (0000:00:10.0) NSID 1 from core 0: 9111.34 106.77 14093.50 5374.26 38468.97 00:07:41.541 PCIE (0000:00:11.0) NSID 1 from core 0: 9111.34 106.77 14082.07 5455.42 38673.24 00:07:41.541 PCIE (0000:00:13.0) NSID 1 from core 0: 9111.34 106.77 14065.68 5498.15 39592.96 00:07:41.541 PCIE (0000:00:12.0) NSID 1 from core 0: 9111.34 106.77 14048.91 5493.79 38534.61 00:07:41.541 PCIE (0000:00:12.0) NSID 2 from core 0: 9111.34 106.77 14031.38 5506.13 37941.66 00:07:41.541 PCIE (0000:00:12.0) NSID 3 from core 0: 9175.05 107.52 13917.01 5464.29 25788.66 00:07:41.542 ======================================================== 00:07:41.542 Total : 54731.73 641.39 14039.62 5374.26 39592.96 00:07:41.542 00:07:41.542 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:41.542 ================================================================================= 00:07:41.542 1.00000% : 5620.972us 00:07:41.542 10.00000% : 6326.745us 00:07:41.542 25.00000% : 12199.778us 00:07:41.542 50.00000% : 15022.868us 00:07:41.542 75.00000% : 17039.360us 00:07:41.542 90.00000% : 18450.905us 00:07:41.542 95.00000% : 19358.326us 00:07:41.542 98.00000% : 20669.046us 00:07:41.542 99.00000% : 29642.437us 00:07:41.542 99.50000% : 37305.108us 00:07:41.542 99.90000% : 38313.354us 00:07:41.542 99.99000% : 38515.003us 00:07:41.542 99.99900% : 38515.003us 00:07:41.542 99.99990% : 38515.003us 00:07:41.542 99.99999% : 38515.003us 00:07:41.542 00:07:41.542 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:41.542 ================================================================================= 00:07:41.542 1.00000% : 5671.385us 00:07:41.542 10.00000% : 6276.332us 00:07:41.542 25.00000% : 12300.603us 00:07:41.542 50.00000% : 15022.868us 00:07:41.542 75.00000% : 17039.360us 00:07:41.542 90.00000% : 18350.080us 00:07:41.542 95.00000% : 19660.800us 00:07:41.542 98.00000% : 20366.572us 00:07:41.542 99.00000% : 28230.892us 00:07:41.542 99.50000% : 37708.406us 00:07:41.542 99.90000% : 38515.003us 00:07:41.542 99.99000% : 38716.652us 00:07:41.542 99.99900% : 38716.652us 00:07:41.542 99.99990% : 38716.652us 00:07:41.542 99.99999% : 38716.652us 00:07:41.542 00:07:41.542 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:41.542 ================================================================================= 00:07:41.542 1.00000% : 5696.591us 00:07:41.542 10.00000% : 6276.332us 00:07:41.542 25.00000% : 12351.015us 00:07:41.542 50.00000% : 15325.342us 00:07:41.542 75.00000% : 16736.886us 00:07:41.542 90.00000% : 18249.255us 00:07:41.542 95.00000% : 19055.852us 00:07:41.542 98.00000% : 20164.923us 00:07:41.542 99.00000% : 27827.594us 00:07:41.542 99.50000% : 38111.705us 00:07:41.542 99.90000% : 39321.600us 00:07:41.542 99.99000% : 39724.898us 00:07:41.542 99.99900% : 39724.898us 00:07:41.542 99.99990% : 39724.898us 00:07:41.542 99.99999% : 39724.898us 00:07:41.542 00:07:41.542 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:41.542 ================================================================================= 00:07:41.542 1.00000% : 5696.591us 00:07:41.542 10.00000% : 6276.332us 00:07:41.542 25.00000% : 12351.015us 00:07:41.542 50.00000% : 15224.517us 00:07:41.542 75.00000% : 16837.711us 00:07:41.542 90.00000% : 18350.080us 00:07:41.542 95.00000% : 19055.852us 00:07:41.542 98.00000% : 20366.572us 
00:07:41.542 99.00000% : 26214.400us 00:07:41.542 99.50000% : 37103.458us 00:07:41.542 99.90000% : 38313.354us 00:07:41.542 99.99000% : 38716.652us 00:07:41.542 99.99900% : 38716.652us 00:07:41.542 99.99990% : 38716.652us 00:07:41.542 99.99999% : 38716.652us 00:07:41.542 00:07:41.542 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:41.542 ================================================================================= 00:07:41.542 1.00000% : 5671.385us 00:07:41.542 10.00000% : 6276.332us 00:07:41.542 25.00000% : 12250.191us 00:07:41.542 50.00000% : 15123.692us 00:07:41.542 75.00000% : 16938.535us 00:07:41.542 90.00000% : 18350.080us 00:07:41.542 95.00000% : 19257.502us 00:07:41.542 98.00000% : 20467.397us 00:07:41.542 99.00000% : 24702.031us 00:07:41.542 99.50000% : 36296.862us 00:07:41.542 99.90000% : 37708.406us 00:07:41.542 99.99000% : 38111.705us 00:07:41.542 99.99900% : 38111.705us 00:07:41.542 99.99990% : 38111.705us 00:07:41.542 99.99999% : 38111.705us 00:07:41.542 00:07:41.542 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:41.542 ================================================================================= 00:07:41.542 1.00000% : 5671.385us 00:07:41.542 10.00000% : 6301.538us 00:07:41.542 25.00000% : 12300.603us 00:07:41.542 50.00000% : 15123.692us 00:07:41.542 75.00000% : 17039.360us 00:07:41.542 90.00000% : 18249.255us 00:07:41.542 95.00000% : 19055.852us 00:07:41.542 98.00000% : 19963.274us 00:07:41.542 99.00000% : 20467.397us 00:07:41.542 99.50000% : 24702.031us 00:07:41.542 99.90000% : 25609.452us 00:07:41.542 99.99000% : 25811.102us 00:07:41.542 99.99900% : 25811.102us 00:07:41.542 99.99990% : 25811.102us 00:07:41.542 99.99999% : 25811.102us 00:07:41.542 00:07:41.542 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:41.542 ============================================================================== 00:07:41.542 Range in us Cumulative IO count 00:07:41.542 5368.911 - 5394.117: 0.0219% ( 2) 00:07:41.542 5394.117 - 5419.323: 0.0328% ( 1) 00:07:41.542 5419.323 - 5444.529: 0.1093% ( 7) 00:07:41.542 5444.529 - 5469.735: 0.2513% ( 13) 00:07:41.542 5469.735 - 5494.942: 0.3169% ( 6) 00:07:41.542 5494.942 - 5520.148: 0.4261% ( 10) 00:07:41.542 5520.148 - 5545.354: 0.6010% ( 16) 00:07:41.542 5545.354 - 5570.560: 0.8086% ( 19) 00:07:41.542 5570.560 - 5595.766: 0.9943% ( 17) 00:07:41.542 5595.766 - 5620.972: 1.1691% ( 16) 00:07:41.542 5620.972 - 5646.178: 1.4642% ( 27) 00:07:41.542 5646.178 - 5671.385: 1.7155% ( 23) 00:07:41.542 5671.385 - 5696.591: 2.0433% ( 30) 00:07:41.542 5696.591 - 5721.797: 2.3492% ( 28) 00:07:41.542 5721.797 - 5747.003: 2.5787% ( 21) 00:07:41.542 5747.003 - 5772.209: 2.9174% ( 31) 00:07:41.542 5772.209 - 5797.415: 3.2015% ( 26) 00:07:41.542 5797.415 - 5822.622: 3.4856% ( 26) 00:07:41.542 5822.622 - 5847.828: 3.8352% ( 32) 00:07:41.542 5847.828 - 5873.034: 4.0975% ( 24) 00:07:41.542 5873.034 - 5898.240: 4.4143% ( 29) 00:07:41.542 5898.240 - 5923.446: 4.6875% ( 25) 00:07:41.542 5923.446 - 5948.652: 5.0699% ( 35) 00:07:41.542 5948.652 - 5973.858: 5.4196% ( 32) 00:07:41.542 5973.858 - 5999.065: 5.7911% ( 34) 00:07:41.542 5999.065 - 6024.271: 6.1189% ( 30) 00:07:41.542 6024.271 - 6049.477: 6.4467% ( 30) 00:07:41.542 6049.477 - 6074.683: 6.8291% ( 35) 00:07:41.542 6074.683 - 6099.889: 7.0913% ( 24) 00:07:41.542 6099.889 - 6125.095: 7.5175% ( 39) 00:07:41.542 6125.095 - 6150.302: 7.7797% ( 24) 00:07:41.542 6150.302 - 6175.508: 8.1949% ( 38) 00:07:41.542 6175.508 - 6200.714: 8.5227% ( 30) 00:07:41.542 
6200.714 - 6225.920: 8.8396% ( 29) 00:07:41.542 6225.920 - 6251.126: 9.2002% ( 33) 00:07:41.542 6251.126 - 6276.332: 9.5389% ( 31) 00:07:41.542 6276.332 - 6301.538: 9.9213% ( 35) 00:07:41.542 6301.538 - 6326.745: 10.2491% ( 30) 00:07:41.542 6326.745 - 6351.951: 10.7190% ( 43) 00:07:41.542 6351.951 - 6377.157: 11.0140% ( 27) 00:07:41.542 6377.157 - 6402.363: 11.3746% ( 33) 00:07:41.542 6402.363 - 6427.569: 11.7242% ( 32) 00:07:41.542 6427.569 - 6452.775: 12.0083% ( 26) 00:07:41.542 6452.775 - 6503.188: 12.5765% ( 52) 00:07:41.542 6503.188 - 6553.600: 13.0354% ( 42) 00:07:41.542 6553.600 - 6604.012: 13.4725% ( 40) 00:07:41.542 6604.012 - 6654.425: 13.8003% ( 30) 00:07:41.542 6654.425 - 6704.837: 14.0625% ( 24) 00:07:41.542 6704.837 - 6755.249: 14.2264% ( 15) 00:07:41.542 6755.249 - 6805.662: 14.4886% ( 24) 00:07:41.542 6805.662 - 6856.074: 14.7290% ( 22) 00:07:41.542 6856.074 - 6906.486: 14.9694% ( 22) 00:07:41.542 6906.486 - 6956.898: 15.1552% ( 17) 00:07:41.542 6956.898 - 7007.311: 15.3518% ( 18) 00:07:41.542 7007.311 - 7057.723: 15.5485% ( 18) 00:07:41.542 7057.723 - 7108.135: 15.7343% ( 17) 00:07:41.542 7108.135 - 7158.548: 15.9309% ( 18) 00:07:41.542 7158.548 - 7208.960: 16.1276% ( 18) 00:07:41.542 7208.960 - 7259.372: 16.3243% ( 18) 00:07:41.542 7259.372 - 7309.785: 16.4991% ( 16) 00:07:41.542 7309.785 - 7360.197: 16.6521% ( 14) 00:07:41.542 7360.197 - 7410.609: 16.8378% ( 17) 00:07:41.542 7410.609 - 7461.022: 16.9908% ( 14) 00:07:41.542 7461.022 - 7511.434: 17.1656% ( 16) 00:07:41.542 7511.434 - 7561.846: 17.2858% ( 11) 00:07:41.542 7561.846 - 7612.258: 17.4279% ( 13) 00:07:41.542 7612.258 - 7662.671: 17.5590% ( 12) 00:07:41.542 7662.671 - 7713.083: 17.7229% ( 15) 00:07:41.542 7713.083 - 7763.495: 17.8868% ( 15) 00:07:41.542 7763.495 - 7813.908: 17.9851% ( 9) 00:07:41.542 7813.908 - 7864.320: 18.1272% ( 13) 00:07:41.542 7864.320 - 7914.732: 18.2583% ( 12) 00:07:41.542 7914.732 - 7965.145: 18.4222% ( 15) 00:07:41.542 7965.145 - 8015.557: 18.4878% ( 6) 00:07:41.542 8015.557 - 8065.969: 18.5315% ( 4) 00:07:41.542 8065.969 - 8116.382: 18.6735% ( 13) 00:07:41.542 8116.382 - 8166.794: 18.7937% ( 11) 00:07:41.542 8166.794 - 8217.206: 18.8702% ( 7) 00:07:41.542 8217.206 - 8267.618: 18.9685% ( 9) 00:07:41.542 8267.618 - 8318.031: 19.0450% ( 7) 00:07:41.542 8318.031 - 8368.443: 19.1324% ( 8) 00:07:41.542 8368.443 - 8418.855: 19.2308% ( 9) 00:07:41.542 8418.855 - 8469.268: 19.3073% ( 7) 00:07:41.542 8469.268 - 8519.680: 19.3947% ( 8) 00:07:41.542 8519.680 - 8570.092: 19.4821% ( 8) 00:07:41.542 8570.092 - 8620.505: 19.5476% ( 6) 00:07:41.542 8620.505 - 8670.917: 19.6460% ( 9) 00:07:41.542 8670.917 - 8721.329: 19.7225% ( 7) 00:07:41.543 8721.329 - 8771.742: 19.8208% ( 9) 00:07:41.543 8771.742 - 8822.154: 19.8973% ( 7) 00:07:41.543 8822.154 - 8872.566: 19.9956% ( 9) 00:07:41.543 8872.566 - 8922.978: 20.0612% ( 6) 00:07:41.543 8922.978 - 8973.391: 20.1814% ( 11) 00:07:41.543 8973.391 - 9023.803: 20.2251% ( 4) 00:07:41.543 9023.803 - 9074.215: 20.3016% ( 7) 00:07:41.543 9074.215 - 9124.628: 20.3562% ( 5) 00:07:41.543 9124.628 - 9175.040: 20.3890% ( 3) 00:07:41.543 9175.040 - 9225.452: 20.4545% ( 6) 00:07:41.543 9225.452 - 9275.865: 20.5201% ( 6) 00:07:41.543 9275.865 - 9326.277: 20.5857% ( 6) 00:07:41.543 9326.277 - 9376.689: 20.6403% ( 5) 00:07:41.543 9376.689 - 9427.102: 20.6840% ( 4) 00:07:41.543 9427.102 - 9477.514: 20.7277% ( 4) 00:07:41.543 9477.514 - 9527.926: 20.7496% ( 2) 00:07:41.543 9527.926 - 9578.338: 20.7933% ( 4) 00:07:41.543 9578.338 - 9628.751: 20.8260% ( 3) 00:07:41.543 9628.751 - 
9679.163: 20.8479% ( 2) 00:07:41.543 9679.163 - 9729.575: 20.9025% ( 5) 00:07:41.543 9729.575 - 9779.988: 20.9244% ( 2) 00:07:41.543 9779.988 - 9830.400: 20.9681% ( 4) 00:07:41.543 9830.400 - 9880.812: 21.0009% ( 3) 00:07:41.543 9880.812 - 9931.225: 21.0337% ( 3) 00:07:41.543 9931.225 - 9981.637: 21.0774% ( 4) 00:07:41.543 9981.637 - 10032.049: 21.1101% ( 3) 00:07:41.543 10032.049 - 10082.462: 21.1429% ( 3) 00:07:41.543 10082.462 - 10132.874: 21.1648% ( 2) 00:07:41.543 10132.874 - 10183.286: 21.1866% ( 2) 00:07:41.543 10183.286 - 10233.698: 21.1976% ( 1) 00:07:41.543 10233.698 - 10284.111: 21.2085% ( 1) 00:07:41.543 10284.111 - 10334.523: 21.2413% ( 3) 00:07:41.543 10334.523 - 10384.935: 21.2522% ( 1) 00:07:41.543 10384.935 - 10435.348: 21.2850% ( 3) 00:07:41.543 10435.348 - 10485.760: 21.3068% ( 2) 00:07:41.543 10485.760 - 10536.172: 21.3287% ( 2) 00:07:41.543 10536.172 - 10586.585: 21.3505% ( 2) 00:07:41.543 10586.585 - 10636.997: 21.3833% ( 3) 00:07:41.543 10636.997 - 10687.409: 21.4052% ( 2) 00:07:41.543 10687.409 - 10737.822: 21.4270% ( 2) 00:07:41.543 10737.822 - 10788.234: 21.4379% ( 1) 00:07:41.543 10788.234 - 10838.646: 21.4598% ( 2) 00:07:41.543 10838.646 - 10889.058: 21.5363% ( 7) 00:07:41.543 10889.058 - 10939.471: 21.6128% ( 7) 00:07:41.543 10939.471 - 10989.883: 21.7220% ( 10) 00:07:41.543 10989.883 - 11040.295: 21.7767% ( 5) 00:07:41.543 11040.295 - 11090.708: 21.8531% ( 7) 00:07:41.543 11090.708 - 11141.120: 21.8969% ( 4) 00:07:41.543 11141.120 - 11191.532: 21.9843% ( 8) 00:07:41.543 11191.532 - 11241.945: 22.0826% ( 9) 00:07:41.543 11241.945 - 11292.357: 22.2137% ( 12) 00:07:41.543 11292.357 - 11342.769: 22.2902% ( 7) 00:07:41.543 11342.769 - 11393.182: 22.4760% ( 17) 00:07:41.543 11393.182 - 11443.594: 22.5524% ( 7) 00:07:41.543 11443.594 - 11494.006: 22.6836% ( 12) 00:07:41.543 11494.006 - 11544.418: 22.7163% ( 3) 00:07:41.543 11544.418 - 11594.831: 22.8365% ( 11) 00:07:41.543 11594.831 - 11645.243: 22.9567% ( 11) 00:07:41.543 11645.243 - 11695.655: 23.0988% ( 13) 00:07:41.543 11695.655 - 11746.068: 23.1862% ( 8) 00:07:41.543 11746.068 - 11796.480: 23.4047% ( 20) 00:07:41.543 11796.480 - 11846.892: 23.5468% ( 13) 00:07:41.543 11846.892 - 11897.305: 23.7434% ( 18) 00:07:41.543 11897.305 - 11947.717: 23.8964% ( 14) 00:07:41.543 11947.717 - 11998.129: 24.1477% ( 23) 00:07:41.543 11998.129 - 12048.542: 24.3663% ( 20) 00:07:41.543 12048.542 - 12098.954: 24.5629% ( 18) 00:07:41.543 12098.954 - 12149.366: 24.9017% ( 31) 00:07:41.543 12149.366 - 12199.778: 25.0874% ( 17) 00:07:41.543 12199.778 - 12250.191: 25.2950% ( 19) 00:07:41.543 12250.191 - 12300.603: 25.5245% ( 21) 00:07:41.543 12300.603 - 12351.015: 25.7649% ( 22) 00:07:41.543 12351.015 - 12401.428: 26.0052% ( 22) 00:07:41.543 12401.428 - 12451.840: 26.1801% ( 16) 00:07:41.543 12451.840 - 12502.252: 26.4969% ( 29) 00:07:41.543 12502.252 - 12552.665: 26.8247% ( 30) 00:07:41.543 12552.665 - 12603.077: 27.2946% ( 43) 00:07:41.543 12603.077 - 12653.489: 27.5350% ( 22) 00:07:41.543 12653.489 - 12703.902: 27.9283% ( 36) 00:07:41.543 12703.902 - 12754.314: 28.2452% ( 29) 00:07:41.543 12754.314 - 12804.726: 28.5402% ( 27) 00:07:41.543 12804.726 - 12855.138: 28.8899% ( 32) 00:07:41.543 12855.138 - 12905.551: 29.2941% ( 37) 00:07:41.543 12905.551 - 13006.375: 30.0590% ( 70) 00:07:41.543 13006.375 - 13107.200: 30.8894% ( 76) 00:07:41.543 13107.200 - 13208.025: 31.5778% ( 63) 00:07:41.543 13208.025 - 13308.849: 32.3208% ( 68) 00:07:41.543 13308.849 - 13409.674: 33.0310% ( 65) 00:07:41.543 13409.674 - 13510.498: 34.0035% ( 89) 
00:07:41.543 13510.498 - 13611.323: 34.7902% ( 72) 00:07:41.543 13611.323 - 13712.148: 35.6862% ( 82) 00:07:41.543 13712.148 - 13812.972: 36.9646% ( 117) 00:07:41.543 13812.972 - 13913.797: 37.6420% ( 62) 00:07:41.543 13913.797 - 14014.622: 38.8330% ( 109) 00:07:41.543 14014.622 - 14115.446: 39.7837% ( 87) 00:07:41.543 14115.446 - 14216.271: 40.9091% ( 103) 00:07:41.543 14216.271 - 14317.095: 42.2531% ( 123) 00:07:41.543 14317.095 - 14417.920: 43.4768% ( 112) 00:07:41.543 14417.920 - 14518.745: 44.6023% ( 103) 00:07:41.543 14518.745 - 14619.569: 45.8916% ( 118) 00:07:41.543 14619.569 - 14720.394: 46.8422% ( 87) 00:07:41.543 14720.394 - 14821.218: 47.8693% ( 94) 00:07:41.543 14821.218 - 14922.043: 48.9292% ( 97) 00:07:41.543 14922.043 - 15022.868: 50.1202% ( 109) 00:07:41.543 15022.868 - 15123.692: 51.2893% ( 107) 00:07:41.543 15123.692 - 15224.517: 52.1525% ( 79) 00:07:41.543 15224.517 - 15325.342: 53.3108% ( 106) 00:07:41.543 15325.342 - 15426.166: 54.5782% ( 116) 00:07:41.543 15426.166 - 15526.991: 55.6709% ( 100) 00:07:41.543 15526.991 - 15627.815: 56.8837% ( 111) 00:07:41.543 15627.815 - 15728.640: 58.1949% ( 120) 00:07:41.543 15728.640 - 15829.465: 59.3313% ( 104) 00:07:41.543 15829.465 - 15930.289: 60.5769% ( 114) 00:07:41.543 15930.289 - 16031.114: 61.7570% ( 108) 00:07:41.543 16031.114 - 16131.938: 62.9808% ( 112) 00:07:41.543 16131.938 - 16232.763: 64.4996% ( 139) 00:07:41.543 16232.763 - 16333.588: 65.8545% ( 124) 00:07:41.543 16333.588 - 16434.412: 67.1219% ( 116) 00:07:41.543 16434.412 - 16535.237: 68.8374% ( 157) 00:07:41.543 16535.237 - 16636.062: 70.4655% ( 149) 00:07:41.543 16636.062 - 16736.886: 71.6783% ( 111) 00:07:41.543 16736.886 - 16837.711: 73.3173% ( 150) 00:07:41.543 16837.711 - 16938.535: 74.4865% ( 107) 00:07:41.543 16938.535 - 17039.360: 75.7430% ( 115) 00:07:41.543 17039.360 - 17140.185: 77.0105% ( 116) 00:07:41.543 17140.185 - 17241.009: 78.2124% ( 110) 00:07:41.543 17241.009 - 17341.834: 79.5236% ( 120) 00:07:41.543 17341.834 - 17442.658: 80.6490% ( 103) 00:07:41.543 17442.658 - 17543.483: 82.0039% ( 124) 00:07:41.543 17543.483 - 17644.308: 83.0529% ( 96) 00:07:41.543 17644.308 - 17745.132: 84.0581% ( 92) 00:07:41.543 17745.132 - 17845.957: 85.0852% ( 94) 00:07:41.543 17845.957 - 17946.782: 86.1560% ( 98) 00:07:41.543 17946.782 - 18047.606: 87.1722% ( 93) 00:07:41.543 18047.606 - 18148.431: 87.9371% ( 70) 00:07:41.543 18148.431 - 18249.255: 88.7784% ( 77) 00:07:41.543 18249.255 - 18350.080: 89.6635% ( 81) 00:07:41.543 18350.080 - 18450.905: 90.4830% ( 75) 00:07:41.543 18450.905 - 18551.729: 91.1167% ( 58) 00:07:41.543 18551.729 - 18652.554: 91.6412% ( 48) 00:07:41.543 18652.554 - 18753.378: 92.2421% ( 55) 00:07:41.543 18753.378 - 18854.203: 92.7229% ( 44) 00:07:41.543 18854.203 - 18955.028: 93.1490% ( 39) 00:07:41.543 18955.028 - 19055.852: 93.7172% ( 52) 00:07:41.543 19055.852 - 19156.677: 94.1652% ( 41) 00:07:41.543 19156.677 - 19257.502: 94.7225% ( 51) 00:07:41.543 19257.502 - 19358.326: 95.1049% ( 35) 00:07:41.543 19358.326 - 19459.151: 95.4108% ( 28) 00:07:41.543 19459.151 - 19559.975: 95.7386% ( 30) 00:07:41.543 19559.975 - 19660.800: 96.0446% ( 28) 00:07:41.543 19660.800 - 19761.625: 96.2959% ( 23) 00:07:41.543 19761.625 - 19862.449: 96.5800% ( 26) 00:07:41.543 19862.449 - 19963.274: 97.0061% ( 39) 00:07:41.543 19963.274 - 20064.098: 97.2465% ( 22) 00:07:41.543 20064.098 - 20164.923: 97.4323% ( 17) 00:07:41.543 20164.923 - 20265.748: 97.5743% ( 13) 00:07:41.543 20265.748 - 20366.572: 97.7601% ( 17) 00:07:41.543 20366.572 - 20467.397: 97.9130% ( 14) 
00:07:41.543 20467.397 - 20568.222: 97.9786% ( 6) 00:07:41.543 20568.222 - 20669.046: 98.1206% ( 13) 00:07:41.543 20669.046 - 20769.871: 98.2627% ( 13) 00:07:41.543 20769.871 - 20870.695: 98.3173% ( 5) 00:07:41.543 20870.695 - 20971.520: 98.4047% ( 8) 00:07:41.543 20971.520 - 21072.345: 98.4703% ( 6) 00:07:41.543 21173.169 - 21273.994: 98.6014% ( 12) 00:07:41.544 28432.542 - 28634.191: 98.6670% ( 6) 00:07:41.544 28634.191 - 28835.840: 98.7544% ( 8) 00:07:41.544 28835.840 - 29037.489: 98.8418% ( 8) 00:07:41.544 29037.489 - 29239.138: 98.9292% ( 8) 00:07:41.544 29239.138 - 29440.788: 98.9948% ( 6) 00:07:41.544 29440.788 - 29642.437: 99.1040% ( 10) 00:07:41.544 29642.437 - 29844.086: 99.1805% ( 7) 00:07:41.544 29844.086 - 30045.735: 99.2679% ( 8) 00:07:41.544 30045.735 - 30247.385: 99.3007% ( 3) 00:07:41.544 36700.160 - 36901.809: 99.3335% ( 3) 00:07:41.544 36901.809 - 37103.458: 99.4209% ( 8) 00:07:41.544 37103.458 - 37305.108: 99.5083% ( 8) 00:07:41.544 37305.108 - 37506.757: 99.5848% ( 7) 00:07:41.544 37506.757 - 37708.406: 99.6722% ( 8) 00:07:41.544 37708.406 - 37910.055: 99.7596% ( 8) 00:07:41.544 37910.055 - 38111.705: 99.8470% ( 8) 00:07:41.544 38111.705 - 38313.354: 99.9344% ( 8) 00:07:41.544 38313.354 - 38515.003: 100.0000% ( 6) 00:07:41.544 00:07:41.544 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:41.544 ============================================================================== 00:07:41.544 Range in us Cumulative IO count 00:07:41.544 5444.529 - 5469.735: 0.0219% ( 2) 00:07:41.544 5469.735 - 5494.942: 0.0437% ( 2) 00:07:41.544 5494.942 - 5520.148: 0.1202% ( 7) 00:07:41.544 5520.148 - 5545.354: 0.1967% ( 7) 00:07:41.544 5545.354 - 5570.560: 0.3059% ( 10) 00:07:41.544 5570.560 - 5595.766: 0.4152% ( 10) 00:07:41.544 5595.766 - 5620.972: 0.5900% ( 16) 00:07:41.544 5620.972 - 5646.178: 0.8086% ( 20) 00:07:41.544 5646.178 - 5671.385: 1.0052% ( 18) 00:07:41.544 5671.385 - 5696.591: 1.2784% ( 25) 00:07:41.544 5696.591 - 5721.797: 1.4969% ( 20) 00:07:41.544 5721.797 - 5747.003: 1.8684% ( 34) 00:07:41.544 5747.003 - 5772.209: 2.2290% ( 33) 00:07:41.544 5772.209 - 5797.415: 2.5787% ( 32) 00:07:41.544 5797.415 - 5822.622: 2.9283% ( 32) 00:07:41.544 5822.622 - 5847.828: 3.2998% ( 34) 00:07:41.544 5847.828 - 5873.034: 3.6385% ( 31) 00:07:41.544 5873.034 - 5898.240: 4.0101% ( 34) 00:07:41.544 5898.240 - 5923.446: 4.3706% ( 33) 00:07:41.544 5923.446 - 5948.652: 4.7421% ( 34) 00:07:41.544 5948.652 - 5973.858: 5.1464% ( 37) 00:07:41.544 5973.858 - 5999.065: 5.5288% ( 35) 00:07:41.544 5999.065 - 6024.271: 5.9113% ( 35) 00:07:41.544 6024.271 - 6049.477: 6.3374% ( 39) 00:07:41.544 6049.477 - 6074.683: 6.7526% ( 38) 00:07:41.544 6074.683 - 6099.889: 7.1897% ( 40) 00:07:41.544 6099.889 - 6125.095: 7.5940% ( 37) 00:07:41.544 6125.095 - 6150.302: 7.9983% ( 37) 00:07:41.544 6150.302 - 6175.508: 8.3807% ( 35) 00:07:41.544 6175.508 - 6200.714: 8.8177% ( 40) 00:07:41.544 6200.714 - 6225.920: 9.2330% ( 38) 00:07:41.544 6225.920 - 6251.126: 9.6591% ( 39) 00:07:41.544 6251.126 - 6276.332: 10.0743% ( 38) 00:07:41.544 6276.332 - 6301.538: 10.5004% ( 39) 00:07:41.544 6301.538 - 6326.745: 10.8719% ( 34) 00:07:41.544 6326.745 - 6351.951: 11.2544% ( 35) 00:07:41.544 6351.951 - 6377.157: 11.6149% ( 33) 00:07:41.544 6377.157 - 6402.363: 11.9646% ( 32) 00:07:41.544 6402.363 - 6427.569: 12.3033% ( 31) 00:07:41.544 6427.569 - 6452.775: 12.5874% ( 26) 00:07:41.544 6452.775 - 6503.188: 13.0573% ( 43) 00:07:41.544 6503.188 - 6553.600: 13.4178% ( 33) 00:07:41.544 6553.600 - 6604.012: 13.6473% ( 21) 
00:07:41.544 6604.012 - 6654.425: 13.8549% ( 19) 00:07:41.544 6654.425 - 6704.837: 14.0516% ( 18) 00:07:41.544 6704.837 - 6755.249: 14.2701% ( 20) 00:07:41.544 6755.249 - 6805.662: 14.4777% ( 19) 00:07:41.544 6805.662 - 6856.074: 14.6744% ( 18) 00:07:41.544 6856.074 - 6906.486: 14.8383% ( 15) 00:07:41.544 6906.486 - 6956.898: 15.0022% ( 15) 00:07:41.544 6956.898 - 7007.311: 15.1442% ( 13) 00:07:41.544 7007.311 - 7057.723: 15.3191% ( 16) 00:07:41.544 7057.723 - 7108.135: 15.5813% ( 24) 00:07:41.544 7108.135 - 7158.548: 15.8654% ( 26) 00:07:41.544 7158.548 - 7208.960: 16.0730% ( 19) 00:07:41.544 7208.960 - 7259.372: 16.3134% ( 22) 00:07:41.544 7259.372 - 7309.785: 16.5647% ( 23) 00:07:41.544 7309.785 - 7360.197: 16.8051% ( 22) 00:07:41.544 7360.197 - 7410.609: 17.0455% ( 22) 00:07:41.544 7410.609 - 7461.022: 17.2203% ( 16) 00:07:41.544 7461.022 - 7511.434: 17.4279% ( 19) 00:07:41.544 7511.434 - 7561.846: 17.6136% ( 17) 00:07:41.544 7561.846 - 7612.258: 17.7885% ( 16) 00:07:41.544 7612.258 - 7662.671: 17.9742% ( 17) 00:07:41.544 7662.671 - 7713.083: 18.1600% ( 17) 00:07:41.544 7713.083 - 7763.495: 18.3020% ( 13) 00:07:41.544 7763.495 - 7813.908: 18.4331% ( 12) 00:07:41.544 7813.908 - 7864.320: 18.5315% ( 9) 00:07:41.544 7864.320 - 7914.732: 18.6407% ( 10) 00:07:41.544 7914.732 - 7965.145: 18.7172% ( 7) 00:07:41.544 7965.145 - 8015.557: 18.7500% ( 3) 00:07:41.544 8015.557 - 8065.969: 18.8483% ( 9) 00:07:41.544 8065.969 - 8116.382: 18.9139% ( 6) 00:07:41.544 8116.382 - 8166.794: 18.9576% ( 4) 00:07:41.544 8166.794 - 8217.206: 19.0013% ( 4) 00:07:41.544 8217.206 - 8267.618: 19.1543% ( 14) 00:07:41.544 8267.618 - 8318.031: 19.2417% ( 8) 00:07:41.544 8318.031 - 8368.443: 19.2963% ( 5) 00:07:41.544 8368.443 - 8418.855: 19.3510% ( 5) 00:07:41.544 8418.855 - 8469.268: 19.4165% ( 6) 00:07:41.544 8469.268 - 8519.680: 19.4821% ( 6) 00:07:41.544 8519.680 - 8570.092: 19.5476% ( 6) 00:07:41.544 8570.092 - 8620.505: 19.6132% ( 6) 00:07:41.544 8620.505 - 8670.917: 19.6788% ( 6) 00:07:41.544 8670.917 - 8721.329: 19.7334% ( 5) 00:07:41.544 8721.329 - 8771.742: 19.8099% ( 7) 00:07:41.544 8771.742 - 8822.154: 19.8645% ( 5) 00:07:41.544 8822.154 - 8872.566: 19.9301% ( 6) 00:07:41.544 8872.566 - 8922.978: 19.9847% ( 5) 00:07:41.544 8922.978 - 8973.391: 20.0503% ( 6) 00:07:41.544 8973.391 - 9023.803: 20.1158% ( 6) 00:07:41.544 9023.803 - 9074.215: 20.1705% ( 5) 00:07:41.544 9074.215 - 9124.628: 20.2360% ( 6) 00:07:41.544 9124.628 - 9175.040: 20.2688% ( 3) 00:07:41.544 9175.040 - 9225.452: 20.3016% ( 3) 00:07:41.544 9225.452 - 9275.865: 20.3234% ( 2) 00:07:41.544 9275.865 - 9326.277: 20.3453% ( 2) 00:07:41.544 9326.277 - 9376.689: 20.3671% ( 2) 00:07:41.544 9376.689 - 9427.102: 20.4327% ( 6) 00:07:41.544 9427.102 - 9477.514: 20.5092% ( 7) 00:07:41.544 9477.514 - 9527.926: 20.5420% ( 3) 00:07:41.544 9527.926 - 9578.338: 20.5966% ( 5) 00:07:41.544 9578.338 - 9628.751: 20.6294% ( 3) 00:07:41.544 9628.751 - 9679.163: 20.6731% ( 4) 00:07:41.544 9679.163 - 9729.575: 20.7277% ( 5) 00:07:41.544 9729.575 - 9779.988: 20.7714% ( 4) 00:07:41.544 9779.988 - 9830.400: 20.8151% ( 4) 00:07:41.544 9830.400 - 9880.812: 20.8479% ( 3) 00:07:41.544 9880.812 - 9931.225: 20.9025% ( 5) 00:07:41.544 9931.225 - 9981.637: 20.9462% ( 4) 00:07:41.544 9981.637 - 10032.049: 21.0009% ( 5) 00:07:41.544 10032.049 - 10082.462: 21.0446% ( 4) 00:07:41.544 10082.462 - 10132.874: 21.0883% ( 4) 00:07:41.544 10132.874 - 10183.286: 21.1320% ( 4) 00:07:41.544 10183.286 - 10233.698: 21.1866% ( 5) 00:07:41.544 10233.698 - 10284.111: 21.2194% ( 3) 00:07:41.544 
10284.111 - 10334.523: 21.2740% ( 5) 00:07:41.544 10334.523 - 10384.935: 21.3177% ( 4) 00:07:41.544 10384.935 - 10435.348: 21.3615% ( 4) 00:07:41.544 10435.348 - 10485.760: 21.4161% ( 5) 00:07:41.544 10485.760 - 10536.172: 21.4707% ( 5) 00:07:41.544 10536.172 - 10586.585: 21.5253% ( 5) 00:07:41.544 10586.585 - 10636.997: 21.5909% ( 6) 00:07:41.544 10636.997 - 10687.409: 21.6565% ( 6) 00:07:41.544 10687.409 - 10737.822: 21.7002% ( 4) 00:07:41.544 10737.822 - 10788.234: 21.7330% ( 3) 00:07:41.544 10788.234 - 10838.646: 21.7767% ( 4) 00:07:41.544 10838.646 - 10889.058: 21.8204% ( 4) 00:07:41.544 10889.058 - 10939.471: 21.8641% ( 4) 00:07:41.544 10939.471 - 10989.883: 21.9296% ( 6) 00:07:41.544 10989.883 - 11040.295: 21.9624% ( 3) 00:07:41.544 11040.295 - 11090.708: 22.0170% ( 5) 00:07:41.544 11090.708 - 11141.120: 22.0717% ( 5) 00:07:41.544 11141.120 - 11191.532: 22.1263% ( 5) 00:07:41.544 11191.532 - 11241.945: 22.1809% ( 5) 00:07:41.544 11241.945 - 11292.357: 22.2356% ( 5) 00:07:41.544 11292.357 - 11342.769: 22.3011% ( 6) 00:07:41.544 11342.769 - 11393.182: 22.3448% ( 4) 00:07:41.544 11393.182 - 11443.594: 22.3995% ( 5) 00:07:41.544 11443.594 - 11494.006: 22.4650% ( 6) 00:07:41.544 11494.006 - 11544.418: 22.5524% ( 8) 00:07:41.544 11544.418 - 11594.831: 22.6508% ( 9) 00:07:41.544 11594.831 - 11645.243: 22.7382% ( 8) 00:07:41.544 11645.243 - 11695.655: 22.8584% ( 11) 00:07:41.544 11695.655 - 11746.068: 22.9677% ( 10) 00:07:41.544 11746.068 - 11796.480: 23.0878% ( 11) 00:07:41.544 11796.480 - 11846.892: 23.1971% ( 10) 00:07:41.544 11846.892 - 11897.305: 23.3610% ( 15) 00:07:41.544 11897.305 - 11947.717: 23.5358% ( 16) 00:07:41.544 11947.717 - 11998.129: 23.7434% ( 19) 00:07:41.544 11998.129 - 12048.542: 23.9510% ( 19) 00:07:41.545 12048.542 - 12098.954: 24.1259% ( 16) 00:07:41.545 12098.954 - 12149.366: 24.3444% ( 20) 00:07:41.545 12149.366 - 12199.778: 24.5629% ( 20) 00:07:41.545 12199.778 - 12250.191: 24.7815% ( 20) 00:07:41.545 12250.191 - 12300.603: 25.0874% ( 28) 00:07:41.545 12300.603 - 12351.015: 25.3934% ( 28) 00:07:41.545 12351.015 - 12401.428: 25.6774% ( 26) 00:07:41.545 12401.428 - 12451.840: 25.9506% ( 25) 00:07:41.545 12451.840 - 12502.252: 26.2238% ( 25) 00:07:41.545 12502.252 - 12552.665: 26.4969% ( 25) 00:07:41.545 12552.665 - 12603.077: 26.7373% ( 22) 00:07:41.545 12603.077 - 12653.489: 26.9886% ( 23) 00:07:41.545 12653.489 - 12703.902: 27.3274% ( 31) 00:07:41.545 12703.902 - 12754.314: 27.6224% ( 27) 00:07:41.545 12754.314 - 12804.726: 27.8846% ( 24) 00:07:41.545 12804.726 - 12855.138: 28.2015% ( 29) 00:07:41.545 12855.138 - 12905.551: 28.5074% ( 28) 00:07:41.545 12905.551 - 13006.375: 29.1740% ( 61) 00:07:41.545 13006.375 - 13107.200: 29.9497% ( 71) 00:07:41.545 13107.200 - 13208.025: 30.8020% ( 78) 00:07:41.545 13208.025 - 13308.849: 31.7417% ( 86) 00:07:41.545 13308.849 - 13409.674: 32.8125% ( 98) 00:07:41.545 13409.674 - 13510.498: 33.8068% ( 91) 00:07:41.545 13510.498 - 13611.323: 34.8339% ( 94) 00:07:41.545 13611.323 - 13712.148: 35.8610% ( 94) 00:07:41.545 13712.148 - 13812.972: 36.9974% ( 104) 00:07:41.545 13812.972 - 13913.797: 38.1228% ( 103) 00:07:41.545 13913.797 - 14014.622: 39.2483% ( 103) 00:07:41.545 14014.622 - 14115.446: 40.3191% ( 98) 00:07:41.545 14115.446 - 14216.271: 41.4663% ( 105) 00:07:41.545 14216.271 - 14317.095: 42.6246% ( 106) 00:07:41.545 14317.095 - 14417.920: 43.6407% ( 93) 00:07:41.545 14417.920 - 14518.745: 44.6788% ( 95) 00:07:41.545 14518.745 - 14619.569: 45.8042% ( 103) 00:07:41.545 14619.569 - 14720.394: 47.0498% ( 114) 00:07:41.545 
14720.394 - 14821.218: 48.3719% ( 121) 00:07:41.545 14821.218 - 14922.043: 49.5520% ( 108) 00:07:41.545 14922.043 - 15022.868: 50.6884% ( 104) 00:07:41.545 15022.868 - 15123.692: 51.7810% ( 100) 00:07:41.545 15123.692 - 15224.517: 52.8081% ( 94) 00:07:41.545 15224.517 - 15325.342: 53.7915% ( 90) 00:07:41.545 15325.342 - 15426.166: 54.7749% ( 90) 00:07:41.545 15426.166 - 15526.991: 55.7802% ( 92) 00:07:41.545 15526.991 - 15627.815: 56.7854% ( 92) 00:07:41.545 15627.815 - 15728.640: 57.7360% ( 87) 00:07:41.545 15728.640 - 15829.465: 58.7631% ( 94) 00:07:41.545 15829.465 - 15930.289: 59.9323% ( 107) 00:07:41.545 15930.289 - 16031.114: 61.2544% ( 121) 00:07:41.545 16031.114 - 16131.938: 62.6311% ( 126) 00:07:41.545 16131.938 - 16232.763: 63.8330% ( 110) 00:07:41.545 16232.763 - 16333.588: 65.2972% ( 134) 00:07:41.545 16333.588 - 16434.412: 66.8816% ( 145) 00:07:41.545 16434.412 - 16535.237: 68.4222% ( 141) 00:07:41.545 16535.237 - 16636.062: 70.0830% ( 152) 00:07:41.545 16636.062 - 16736.886: 71.7330% ( 151) 00:07:41.545 16736.886 - 16837.711: 73.3173% ( 145) 00:07:41.545 16837.711 - 16938.535: 74.8470% ( 140) 00:07:41.545 16938.535 - 17039.360: 76.4095% ( 143) 00:07:41.545 17039.360 - 17140.185: 77.7753% ( 125) 00:07:41.545 17140.185 - 17241.009: 79.1302% ( 124) 00:07:41.545 17241.009 - 17341.834: 80.4305% ( 119) 00:07:41.545 17341.834 - 17442.658: 81.8510% ( 130) 00:07:41.545 17442.658 - 17543.483: 83.0092% ( 106) 00:07:41.545 17543.483 - 17644.308: 84.1892% ( 108) 00:07:41.545 17644.308 - 17745.132: 85.2382% ( 96) 00:07:41.545 17745.132 - 17845.957: 86.2434% ( 92) 00:07:41.545 17845.957 - 17946.782: 87.2487% ( 92) 00:07:41.545 17946.782 - 18047.606: 88.2758% ( 94) 00:07:41.545 18047.606 - 18148.431: 89.1499% ( 80) 00:07:41.545 18148.431 - 18249.255: 89.8820% ( 67) 00:07:41.545 18249.255 - 18350.080: 90.5157% ( 58) 00:07:41.545 18350.080 - 18450.905: 90.9856% ( 43) 00:07:41.545 18450.905 - 18551.729: 91.3352% ( 32) 00:07:41.545 18551.729 - 18652.554: 91.6302% ( 27) 00:07:41.545 18652.554 - 18753.378: 91.8925% ( 24) 00:07:41.545 18753.378 - 18854.203: 92.3405% ( 41) 00:07:41.545 18854.203 - 18955.028: 92.6136% ( 25) 00:07:41.545 18955.028 - 19055.852: 92.9196% ( 28) 00:07:41.545 19055.852 - 19156.677: 93.2365% ( 29) 00:07:41.545 19156.677 - 19257.502: 93.5970% ( 33) 00:07:41.545 19257.502 - 19358.326: 93.9576% ( 33) 00:07:41.545 19358.326 - 19459.151: 94.3728% ( 38) 00:07:41.545 19459.151 - 19559.975: 94.9082% ( 49) 00:07:41.545 19559.975 - 19660.800: 95.4655% ( 51) 00:07:41.545 19660.800 - 19761.625: 95.9899% ( 48) 00:07:41.545 19761.625 - 19862.449: 96.4598% ( 43) 00:07:41.545 19862.449 - 19963.274: 96.8969% ( 40) 00:07:41.545 19963.274 - 20064.098: 97.3011% ( 37) 00:07:41.545 20064.098 - 20164.923: 97.6617% ( 33) 00:07:41.545 20164.923 - 20265.748: 97.9677% ( 28) 00:07:41.545 20265.748 - 20366.572: 98.2408% ( 25) 00:07:41.545 20366.572 - 20467.397: 98.4047% ( 15) 00:07:41.545 20467.397 - 20568.222: 98.4594% ( 5) 00:07:41.545 20568.222 - 20669.046: 98.5249% ( 6) 00:07:41.545 20669.046 - 20769.871: 98.5795% ( 5) 00:07:41.545 20769.871 - 20870.695: 98.6014% ( 2) 00:07:41.545 27020.997 - 27222.646: 98.6123% ( 1) 00:07:41.545 27222.646 - 27424.295: 98.6997% ( 8) 00:07:41.545 27424.295 - 27625.945: 98.7762% ( 7) 00:07:41.545 27625.945 - 27827.594: 98.8636% ( 8) 00:07:41.545 27827.594 - 28029.243: 98.9510% ( 8) 00:07:41.545 28029.243 - 28230.892: 99.0385% ( 8) 00:07:41.545 28230.892 - 28432.542: 99.1259% ( 8) 00:07:41.545 28432.542 - 28634.191: 99.2242% ( 9) 00:07:41.545 28634.191 - 28835.840: 
99.3007% ( 7) 00:07:41.545 36901.809 - 37103.458: 99.3444% ( 4) 00:07:41.545 37103.458 - 37305.108: 99.3990% ( 5) 00:07:41.545 37305.108 - 37506.757: 99.4646% ( 6) 00:07:41.545 37506.757 - 37708.406: 99.5520% ( 8) 00:07:41.545 37708.406 - 37910.055: 99.6503% ( 9) 00:07:41.545 37910.055 - 38111.705: 99.7268% ( 7) 00:07:41.545 38111.705 - 38313.354: 99.8252% ( 9) 00:07:41.545 38313.354 - 38515.003: 99.9235% ( 9) 00:07:41.545 38515.003 - 38716.652: 100.0000% ( 7) 00:07:41.545 00:07:41.545 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:41.545 ============================================================================== 00:07:41.545 Range in us Cumulative IO count 00:07:41.545 5494.942 - 5520.148: 0.0328% ( 3) 00:07:41.545 5520.148 - 5545.354: 0.0983% ( 6) 00:07:41.545 5545.354 - 5570.560: 0.2076% ( 10) 00:07:41.545 5570.560 - 5595.766: 0.3278% ( 11) 00:07:41.545 5595.766 - 5620.972: 0.5135% ( 17) 00:07:41.545 5620.972 - 5646.178: 0.7539% ( 22) 00:07:41.545 5646.178 - 5671.385: 0.9834% ( 21) 00:07:41.545 5671.385 - 5696.591: 1.2784% ( 27) 00:07:41.545 5696.591 - 5721.797: 1.5625% ( 26) 00:07:41.545 5721.797 - 5747.003: 1.8684% ( 28) 00:07:41.545 5747.003 - 5772.209: 2.1853% ( 29) 00:07:41.545 5772.209 - 5797.415: 2.5896% ( 37) 00:07:41.545 5797.415 - 5822.622: 2.9502% ( 33) 00:07:41.545 5822.622 - 5847.828: 3.3217% ( 34) 00:07:41.545 5847.828 - 5873.034: 3.7150% ( 36) 00:07:41.545 5873.034 - 5898.240: 4.0975% ( 35) 00:07:41.545 5898.240 - 5923.446: 4.4690% ( 34) 00:07:41.545 5923.446 - 5948.652: 4.8514% ( 35) 00:07:41.545 5948.652 - 5973.858: 5.2338% ( 35) 00:07:41.545 5973.858 - 5999.065: 5.6381% ( 37) 00:07:41.545 5999.065 - 6024.271: 6.0424% ( 37) 00:07:41.545 6024.271 - 6049.477: 6.4030% ( 33) 00:07:41.545 6049.477 - 6074.683: 6.7854% ( 35) 00:07:41.545 6074.683 - 6099.889: 7.1897% ( 37) 00:07:41.545 6099.889 - 6125.095: 7.5940% ( 37) 00:07:41.545 6125.095 - 6150.302: 8.0092% ( 38) 00:07:41.545 6150.302 - 6175.508: 8.4244% ( 38) 00:07:41.545 6175.508 - 6200.714: 8.8177% ( 36) 00:07:41.545 6200.714 - 6225.920: 9.2220% ( 37) 00:07:41.545 6225.920 - 6251.126: 9.6263% ( 37) 00:07:41.545 6251.126 - 6276.332: 10.0197% ( 36) 00:07:41.545 6276.332 - 6301.538: 10.4021% ( 35) 00:07:41.545 6301.538 - 6326.745: 10.7736% ( 34) 00:07:41.545 6326.745 - 6351.951: 11.2107% ( 40) 00:07:41.545 6351.951 - 6377.157: 11.6259% ( 38) 00:07:41.545 6377.157 - 6402.363: 11.9974% ( 34) 00:07:41.545 6402.363 - 6427.569: 12.2815% ( 26) 00:07:41.545 6427.569 - 6452.775: 12.5437% ( 24) 00:07:41.545 6452.775 - 6503.188: 12.9808% ( 40) 00:07:41.545 6503.188 - 6553.600: 13.3851% ( 37) 00:07:41.545 6553.600 - 6604.012: 13.7019% ( 29) 00:07:41.545 6604.012 - 6654.425: 14.0079% ( 28) 00:07:41.545 6654.425 - 6704.837: 14.2155% ( 19) 00:07:41.545 6704.837 - 6755.249: 14.4996% ( 26) 00:07:41.545 6755.249 - 6805.662: 14.7181% ( 20) 00:07:41.545 6805.662 - 6856.074: 15.0350% ( 29) 00:07:41.545 6856.074 - 6906.486: 15.2972% ( 24) 00:07:41.545 6906.486 - 6956.898: 15.4720% ( 16) 00:07:41.545 6956.898 - 7007.311: 15.6359% ( 15) 00:07:41.545 7007.311 - 7057.723: 15.7670% ( 12) 00:07:41.545 7057.723 - 7108.135: 15.9091% ( 13) 00:07:41.545 7108.135 - 7158.548: 16.0730% ( 15) 00:07:41.545 7158.548 - 7208.960: 16.2806% ( 19) 00:07:41.545 7208.960 - 7259.372: 16.5319% ( 23) 00:07:41.545 7259.372 - 7309.785: 16.7614% ( 21) 00:07:41.545 7309.785 - 7360.197: 16.9471% ( 17) 00:07:41.545 7360.197 - 7410.609: 17.1329% ( 17) 00:07:41.545 7410.609 - 7461.022: 17.3405% ( 19) 00:07:41.545 7461.022 - 7511.434: 17.5372% ( 18) 
00:07:41.546 [remaining buckets of the in-progress latency histogram elided: cumulative IO count rises from 17.7229% at the 7511.434us - 7561.846us bucket to 100.0000% at the 39523.249us - 39724.898us bucket]
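Every bucket line in these histograms has the same shape: a latency range in microseconds, the cumulative percentage of IOs completed at or below the range's upper bound, and, in parentheses, the number of IOs that landed in that bucket. The percentile entries in the summary sections further down are read straight off the cumulative column: a percentile is the upper bound of the first bucket whose cumulative value reaches it. A minimal parsing sketch (Python; the regex mirrors the bucket format visible in this log, the helper name is made up):

```python
import re

# A bucket line looks like: "5469.735 - 5494.942: 0.0219% ( 2)"
#   <low_us> - <high_us>: <cumulative %> ( <IOs in this bucket> )
BUCKET_RE = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\)")

def percentile_us(histogram_lines, pct):
    """Upper bound (us) of the first bucket whose cumulative percentage
    reaches pct -- i.e. how the 50.00000%, 99.00000%, ... entries in the
    summary sections relate to the histogram buckets."""
    for line in histogram_lines:
        m = BUCKET_RE.search(line)
        if m and float(m.group(3)) >= pct:
            return float(m.group(2))
    return None  # pct lies above the last cumulative value seen
```

With a full bucket list in hand, percentile_us(lines, 50.0) reproduces the 50.00000% figure of the matching summary section.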
00:07:41.547 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:41.547 ==============================================================================
00:07:41.547 Range in us Cumulative IO count
00:07:41.547 [buckets 5469.735us - 38716.652us elided; cumulative IO count reaches 98.6014% by 20870.695us and 100.0000% by 38716.652us]
00:07:41.548 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:41.548 ==============================================================================
00:07:41.548 Range in us Cumulative IO count
00:07:41.549 [buckets 5494.942us - 38111.705us elided; cumulative IO count reaches 98.6014% by 20971.520us and 100.0000% by 38111.705us]
00:07:41.550 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:41.550 ==============================================================================
00:07:41.550 Range in us Cumulative IO count
00:07:41.551 [buckets 5444.529us - 25811.102us elided; cumulative IO count reaches 99.3056% by 20971.520us and 100.0000% by 25811.102us]
00:07:41.551 10:06:47 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:42.928 Initializing NVMe Controllers
00:07:42.928 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:42.928 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:42.928 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:42.928 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:42.928 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:42.928 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:42.928 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:42.928 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:42.928 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:42.928 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:42.928 Initialization complete. Launching workers.
00:07:42.928 ========================================================
00:07:42.928                                                                Latency(us)
00:07:42.928 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:42.928 PCIE (0000:00:10.0) NSID 1 from core 0:    8881.59     104.08   14451.59    6480.30   37653.33
00:07:42.928 PCIE (0000:00:11.0) NSID 1 from core 0:    8881.59     104.08   14434.15    6617.65   36274.87
00:07:42.928 PCIE (0000:00:13.0) NSID 1 from core 0:    8881.59     104.08   14416.72    6496.09   35700.58
00:07:42.928 PCIE (0000:00:12.0) NSID 1 from core 0:    8881.59     104.08   14398.55    6588.80   34328.77
00:07:42.928 PCIE (0000:00:12.0) NSID 2 from core 0:    8881.59     104.08   14376.10    6598.77   33087.57
00:07:42.928 PCIE (0000:00:12.0) NSID 3 from core 0:    8945.49     104.83   14251.02    6534.76   24967.75
00:07:42.928 ========================================================
00:07:42.928 Total                                  :   53353.46     625.24   14387.86    6480.30   37653.33
00:07:42.929 Summary latency data (us), per device from core 0 (the 99.99900%, 99.99990% and 99.99999% rows equal each device's 99.99000% value):
00:07:42.929 Percentile   10.0 NSID1   11.0 NSID1   13.0 NSID1   12.0 NSID1   12.0 NSID2   12.0 NSID3
00:07:42.929   1.00000%     6906.486     7007.311     6956.898     6956.898     7007.311     7007.311
00:07:42.929  10.00000%    11594.831    11494.006    11443.594    11594.831    11544.418    11494.006
00:07:42.929  25.00000%    12653.489    12754.314    12552.665    12603.077    12653.489    12703.902
00:07:42.929  50.00000%    14216.271    14115.446    14115.446    14115.446    14115.446    14014.622
00:07:42.929  75.00000%    16031.114    16031.114    16232.763    16131.938    16131.938    16031.114
00:07:42.929  90.00000%    17745.132    17644.308    17745.132    17644.308    17644.308    17543.483
00:07:42.929  95.00000%    18753.378    18955.028    19156.677    19055.852    18753.378    18350.080
00:07:42.929  98.00000%    20265.748    20265.748    20265.748    20064.098    20064.098    19459.151
00:07:42.929  99.00000%    28230.892    27625.945    27020.997    25710.277    24399.557    20064.098
00:07:42.929  99.50000%    36095.212    34885.317    34280.369    33070.474    31658.929    23492.135
00:07:42.929  99.90000%    37506.757    36095.212    35490.265    34280.369    32868.825    24802.855
00:07:42.929  99.99000%    37708.406    36296.862    35893.563    34482.018    33272.123    25004.505
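Two sanity checks fall out of this summary table. Throughput is just IOPS times the IO size from the command line (-o 12288, i.e. 12 KiB writes), and with a queue depth of 128 (-q 128), Little's law predicts IOPS close to queue depth divided by average latency. A quick check (Python; figures copied from the table above):

```python
QDEPTH = 128     # -q 128 on the spdk_nvme_perf command line
IO_SIZE = 12288  # -o 12288: bytes per IO

def check(iops, avg_us):
    mib_s = iops * IO_SIZE / (1 << 20)        # throughput implied by IOPS
    predicted_iops = QDEPTH / (avg_us / 1e6)  # Little's law: L = lambda * W
    return mib_s, predicted_iops

# PCIE (0000:00:10.0) NSID 1: reported 8881.59 IOPS, 104.08 MiB/s, 14451.59 us avg
print(check(8881.59, 14451.59))
# -> (104.08..., 8857.1...): throughput matches the table exactly,
#    and Little's law lands within ~0.3% of the reported IOPS
```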
00:07:42.929 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:42.929 ==============================================================================
00:07:42.929 Range in us Cumulative IO count
00:07:42.930 [buckets 6452.775us - 35490.265us elided; cumulative IO count reaches 98.5612% by 20870.695us and 99.3593% by 35490.265us]
( 7) 00:07:42.930 35490.265 - 35691.914: 99.4042% ( 4) 00:07:42.930 35691.914 - 35893.563: 99.4604% ( 5) 00:07:42.930 35893.563 - 36095.212: 99.5166% ( 5) 00:07:42.930 36095.212 - 36296.862: 99.5728% ( 5) 00:07:42.930 36296.862 - 36498.511: 99.6515% ( 7) 00:07:42.930 36498.511 - 36700.160: 99.7077% ( 5) 00:07:42.930 36700.160 - 36901.809: 99.7639% ( 5) 00:07:42.930 36901.809 - 37103.458: 99.8314% ( 6) 00:07:42.930 37103.458 - 37305.108: 99.8876% ( 5) 00:07:42.930 37305.108 - 37506.757: 99.9550% ( 6) 00:07:42.930 37506.757 - 37708.406: 100.0000% ( 4) 00:07:42.930 00:07:42.930 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:42.930 ============================================================================== 00:07:42.930 Range in us Cumulative IO count 00:07:42.930 6604.012 - 6654.425: 0.0112% ( 1) 00:07:42.930 6654.425 - 6704.837: 0.0225% ( 1) 00:07:42.930 6704.837 - 6755.249: 0.0899% ( 6) 00:07:42.930 6755.249 - 6805.662: 0.1461% ( 5) 00:07:42.930 6805.662 - 6856.074: 0.2810% ( 12) 00:07:42.930 6856.074 - 6906.486: 0.4834% ( 18) 00:07:42.930 6906.486 - 6956.898: 0.8656% ( 34) 00:07:42.930 6956.898 - 7007.311: 1.4276% ( 50) 00:07:42.930 7007.311 - 7057.723: 1.6412% ( 19) 00:07:42.930 7057.723 - 7108.135: 1.8098% ( 15) 00:07:42.930 7108.135 - 7158.548: 1.9559% ( 13) 00:07:42.930 7158.548 - 7208.960: 2.0796% ( 11) 00:07:42.930 7208.960 - 7259.372: 2.1246% ( 4) 00:07:42.930 7259.372 - 7309.785: 2.1583% ( 3) 00:07:42.930 7410.609 - 7461.022: 2.1695% ( 1) 00:07:42.930 7461.022 - 7511.434: 2.2594% ( 8) 00:07:42.930 7511.434 - 7561.846: 2.4056% ( 13) 00:07:42.930 7561.846 - 7612.258: 2.6304% ( 20) 00:07:42.930 7612.258 - 7662.671: 2.7540% ( 11) 00:07:42.930 7662.671 - 7713.083: 3.1362% ( 34) 00:07:42.930 7713.083 - 7763.495: 3.2262% ( 8) 00:07:42.930 7763.495 - 7813.908: 3.2824% ( 5) 00:07:42.930 7813.908 - 7864.320: 3.3386% ( 5) 00:07:42.930 7864.320 - 7914.732: 3.4173% ( 7) 00:07:42.930 7914.732 - 7965.145: 3.4510% ( 3) 00:07:42.930 7965.145 - 8015.557: 3.4847% ( 3) 00:07:42.930 8015.557 - 8065.969: 3.5072% ( 2) 00:07:42.930 8065.969 - 8116.382: 3.5184% ( 1) 00:07:42.930 8116.382 - 8166.794: 3.5522% ( 3) 00:07:42.930 8166.794 - 8217.206: 3.5746% ( 2) 00:07:42.930 8217.206 - 8267.618: 3.5971% ( 2) 00:07:42.930 9275.865 - 9326.277: 3.6196% ( 2) 00:07:42.930 9326.277 - 9376.689: 3.6646% ( 4) 00:07:42.930 9376.689 - 9427.102: 3.7208% ( 5) 00:07:42.930 9427.102 - 9477.514: 3.7770% ( 5) 00:07:42.930 9477.514 - 9527.926: 3.8444% ( 6) 00:07:42.930 9527.926 - 9578.338: 3.9231% ( 7) 00:07:42.930 9578.338 - 9628.751: 4.0130% ( 8) 00:07:42.930 9628.751 - 9679.163: 4.0468% ( 3) 00:07:42.930 9679.163 - 9729.575: 4.0805% ( 3) 00:07:42.930 9729.575 - 9779.988: 4.1142% ( 3) 00:07:42.930 9779.988 - 9830.400: 4.1479% ( 3) 00:07:42.930 9830.400 - 9880.812: 4.1929% ( 4) 00:07:42.930 9880.812 - 9931.225: 4.2266% ( 3) 00:07:42.930 9931.225 - 9981.637: 4.2716% ( 4) 00:07:42.930 9981.637 - 10032.049: 4.3165% ( 4) 00:07:42.930 10032.049 - 10082.462: 4.3728% ( 5) 00:07:42.930 10082.462 - 10132.874: 4.4402% ( 6) 00:07:42.930 10132.874 - 10183.286: 4.4852% ( 4) 00:07:42.930 10183.286 - 10233.698: 4.5189% ( 3) 00:07:42.930 10233.698 - 10284.111: 4.5638% ( 4) 00:07:42.930 10284.111 - 10334.523: 4.7212% ( 14) 00:07:42.930 10334.523 - 10384.935: 4.9123% ( 17) 00:07:42.930 10384.935 - 10435.348: 4.9798% ( 6) 00:07:42.930 10435.348 - 10485.760: 5.1484% ( 15) 00:07:42.930 10485.760 - 10536.172: 5.2945% ( 13) 00:07:42.930 10536.172 - 10586.585: 5.4631% ( 15) 00:07:42.930 10586.585 - 10636.997: 5.5306% ( 6) 
00:07:42.930 10636.997 - 10687.409: 5.5980% ( 6) 00:07:42.930 10687.409 - 10737.822: 5.7104% ( 10) 00:07:42.930 10737.822 - 10788.234: 5.8453% ( 12) 00:07:42.930 10788.234 - 10838.646: 6.0364% ( 17) 00:07:42.930 10838.646 - 10889.058: 6.2950% ( 23) 00:07:42.930 10889.058 - 10939.471: 6.5985% ( 27) 00:07:42.930 10939.471 - 10989.883: 6.8121% ( 19) 00:07:42.930 10989.883 - 11040.295: 7.0481% ( 21) 00:07:42.930 11040.295 - 11090.708: 7.3741% ( 29) 00:07:42.930 11090.708 - 11141.120: 7.6439% ( 24) 00:07:42.930 11141.120 - 11191.532: 7.8912% ( 22) 00:07:42.930 11191.532 - 11241.945: 8.1610% ( 24) 00:07:42.930 11241.945 - 11292.357: 8.4195% ( 23) 00:07:42.930 11292.357 - 11342.769: 8.7567% ( 30) 00:07:42.930 11342.769 - 11393.182: 9.1951% ( 39) 00:07:42.930 11393.182 - 11443.594: 9.6335% ( 39) 00:07:42.930 11443.594 - 11494.006: 10.0270% ( 35) 00:07:42.930 11494.006 - 11544.418: 10.3305% ( 27) 00:07:42.930 11544.418 - 11594.831: 10.8925% ( 50) 00:07:42.930 11594.831 - 11645.243: 11.3647% ( 42) 00:07:42.930 11645.243 - 11695.655: 11.7019% ( 30) 00:07:42.930 11695.655 - 11746.068: 12.4213% ( 64) 00:07:42.930 11746.068 - 11796.480: 12.8710% ( 40) 00:07:42.930 11796.480 - 11846.892: 13.4105% ( 48) 00:07:42.930 11846.892 - 11897.305: 13.7927% ( 34) 00:07:42.930 11897.305 - 11947.717: 14.3548% ( 50) 00:07:42.930 11947.717 - 11998.129: 14.9505% ( 53) 00:07:42.930 11998.129 - 12048.542: 15.6700% ( 64) 00:07:42.930 12048.542 - 12098.954: 16.3107% ( 57) 00:07:42.930 12098.954 - 12149.366: 16.9402% ( 56) 00:07:42.930 12149.366 - 12199.778: 17.5472% ( 54) 00:07:42.930 12199.778 - 12250.191: 18.3790% ( 74) 00:07:42.930 12250.191 - 12300.603: 19.2558% ( 78) 00:07:42.930 12300.603 - 12351.015: 19.9978% ( 66) 00:07:42.930 12351.015 - 12401.428: 20.7059% ( 63) 00:07:42.930 12401.428 - 12451.840: 21.3692% ( 59) 00:07:42.931 12451.840 - 12502.252: 22.0549% ( 61) 00:07:42.931 12502.252 - 12552.665: 22.6169% ( 50) 00:07:42.931 12552.665 - 12603.077: 23.3026% ( 61) 00:07:42.931 12603.077 - 12653.489: 23.9433% ( 57) 00:07:42.931 12653.489 - 12703.902: 24.6853% ( 66) 00:07:42.931 12703.902 - 12754.314: 25.5396% ( 76) 00:07:42.931 12754.314 - 12804.726: 26.4726% ( 83) 00:07:42.931 12804.726 - 12855.138: 27.4955% ( 91) 00:07:42.931 12855.138 - 12905.551: 28.8107% ( 117) 00:07:42.931 12905.551 - 13006.375: 31.3174% ( 223) 00:07:42.931 13006.375 - 13107.200: 33.8916% ( 229) 00:07:42.931 13107.200 - 13208.025: 36.4883% ( 231) 00:07:42.931 13208.025 - 13308.849: 38.5117% ( 180) 00:07:42.931 13308.849 - 13409.674: 40.2428% ( 154) 00:07:42.931 13409.674 - 13510.498: 41.8615% ( 144) 00:07:42.931 13510.498 - 13611.323: 43.2554% ( 124) 00:07:42.931 13611.323 - 13712.148: 44.7954% ( 137) 00:07:42.931 13712.148 - 13812.972: 46.5940% ( 160) 00:07:42.931 13812.972 - 13913.797: 48.1340% ( 137) 00:07:42.931 13913.797 - 14014.622: 49.6515% ( 135) 00:07:42.931 14014.622 - 14115.446: 51.0004% ( 120) 00:07:42.931 14115.446 - 14216.271: 52.3831% ( 123) 00:07:42.931 14216.271 - 14317.095: 53.4173% ( 92) 00:07:42.931 14317.095 - 14417.920: 54.3952% ( 87) 00:07:42.931 14417.920 - 14518.745: 55.4294% ( 92) 00:07:42.931 14518.745 - 14619.569: 56.5085% ( 96) 00:07:42.931 14619.569 - 14720.394: 57.6214% ( 99) 00:07:42.931 14720.394 - 14821.218: 58.9254% ( 116) 00:07:42.931 14821.218 - 14922.043: 60.5216% ( 142) 00:07:42.931 14922.043 - 15022.868: 62.1515% ( 145) 00:07:42.931 15022.868 - 15123.692: 63.8152% ( 148) 00:07:42.931 15123.692 - 15224.517: 65.1079% ( 115) 00:07:42.931 15224.517 - 15325.342: 66.4793% ( 122) 00:07:42.931 15325.342 - 
15426.166: 67.6821% ( 107) 00:07:42.931 15426.166 - 15526.991: 68.9861% ( 116) 00:07:42.931 15526.991 - 15627.815: 70.1776% ( 106) 00:07:42.931 15627.815 - 15728.640: 71.5715% ( 124) 00:07:42.931 15728.640 - 15829.465: 73.0103% ( 128) 00:07:42.931 15829.465 - 15930.289: 74.2019% ( 106) 00:07:42.931 15930.289 - 16031.114: 75.2361% ( 92) 00:07:42.931 16031.114 - 16131.938: 76.1129% ( 78) 00:07:42.931 16131.938 - 16232.763: 76.8772% ( 68) 00:07:42.931 16232.763 - 16333.588: 77.8665% ( 88) 00:07:42.931 16333.588 - 16434.412: 79.0355% ( 104) 00:07:42.931 16434.412 - 16535.237: 80.1484% ( 99) 00:07:42.931 16535.237 - 16636.062: 81.4861% ( 119) 00:07:42.931 16636.062 - 16736.886: 82.8799% ( 124) 00:07:42.931 16736.886 - 16837.711: 83.7905% ( 81) 00:07:42.931 16837.711 - 16938.535: 84.7235% ( 83) 00:07:42.931 16938.535 - 17039.360: 85.7914% ( 95) 00:07:42.931 17039.360 - 17140.185: 86.7693% ( 87) 00:07:42.931 17140.185 - 17241.009: 87.5674% ( 71) 00:07:42.931 17241.009 - 17341.834: 88.3543% ( 70) 00:07:42.931 17341.834 - 17442.658: 89.1637% ( 72) 00:07:42.931 17442.658 - 17543.483: 89.8156% ( 58) 00:07:42.931 17543.483 - 17644.308: 90.5238% ( 63) 00:07:42.931 17644.308 - 17745.132: 91.3332% ( 72) 00:07:42.931 17745.132 - 17845.957: 91.8503% ( 46) 00:07:42.931 17845.957 - 17946.782: 92.2549% ( 36) 00:07:42.931 17946.782 - 18047.606: 92.5809% ( 29) 00:07:42.931 18047.606 - 18148.431: 92.8732% ( 26) 00:07:42.931 18148.431 - 18249.255: 93.1317% ( 23) 00:07:42.931 18249.255 - 18350.080: 93.4128% ( 25) 00:07:42.931 18350.080 - 18450.905: 93.7950% ( 34) 00:07:42.931 18450.905 - 18551.729: 94.1659% ( 33) 00:07:42.931 18551.729 - 18652.554: 94.3570% ( 17) 00:07:42.931 18652.554 - 18753.378: 94.5481% ( 17) 00:07:42.931 18753.378 - 18854.203: 94.8404% ( 26) 00:07:42.931 18854.203 - 18955.028: 95.1776% ( 30) 00:07:42.931 18955.028 - 19055.852: 95.6048% ( 38) 00:07:42.931 19055.852 - 19156.677: 95.9982% ( 35) 00:07:42.931 19156.677 - 19257.502: 96.3804% ( 34) 00:07:42.931 19257.502 - 19358.326: 96.6277% ( 22) 00:07:42.931 19358.326 - 19459.151: 96.8413% ( 19) 00:07:42.931 19459.151 - 19559.975: 96.9649% ( 11) 00:07:42.931 19559.975 - 19660.800: 97.1448% ( 16) 00:07:42.931 19660.800 - 19761.625: 97.3134% ( 15) 00:07:42.931 19761.625 - 19862.449: 97.4595% ( 13) 00:07:42.931 19862.449 - 19963.274: 97.6281% ( 15) 00:07:42.931 19963.274 - 20064.098: 97.7855% ( 14) 00:07:42.931 20064.098 - 20164.923: 97.9541% ( 15) 00:07:42.931 20164.923 - 20265.748: 98.1003% ( 13) 00:07:42.931 20265.748 - 20366.572: 98.2352% ( 12) 00:07:42.931 20366.572 - 20467.397: 98.3476% ( 10) 00:07:42.931 20467.397 - 20568.222: 98.4150% ( 6) 00:07:42.931 20568.222 - 20669.046: 98.4712% ( 5) 00:07:42.931 21173.169 - 21273.994: 98.4825% ( 1) 00:07:42.931 21273.994 - 21374.818: 98.5387% ( 5) 00:07:42.931 21374.818 - 21475.643: 98.5612% ( 2) 00:07:42.931 26214.400 - 26416.049: 98.6174% ( 5) 00:07:42.931 26416.049 - 26617.698: 98.6848% ( 6) 00:07:42.931 26617.698 - 26819.348: 98.7522% ( 6) 00:07:42.931 26819.348 - 27020.997: 98.8197% ( 6) 00:07:42.931 27020.997 - 27222.646: 98.8871% ( 6) 00:07:42.931 27222.646 - 27424.295: 98.9658% ( 7) 00:07:42.931 27424.295 - 27625.945: 99.0333% ( 6) 00:07:42.931 27625.945 - 27827.594: 99.1120% ( 7) 00:07:42.931 27827.594 - 28029.243: 99.1682% ( 5) 00:07:42.931 28029.243 - 28230.892: 99.2469% ( 7) 00:07:42.931 28230.892 - 28432.542: 99.2806% ( 3) 00:07:42.931 34078.720 - 34280.369: 99.3031% ( 2) 00:07:42.931 34280.369 - 34482.018: 99.3705% ( 6) 00:07:42.931 34482.018 - 34683.668: 99.4379% ( 6) 00:07:42.931 
34683.668 - 34885.317: 99.5054% ( 6) 00:07:42.931 34885.317 - 35086.966: 99.5841% ( 7) 00:07:42.931 35086.966 - 35288.615: 99.6515% ( 6) 00:07:42.931 35288.615 - 35490.265: 99.7190% ( 6) 00:07:42.931 35490.265 - 35691.914: 99.7864% ( 6) 00:07:42.931 35691.914 - 35893.563: 99.8539% ( 6) 00:07:42.931 35893.563 - 36095.212: 99.9326% ( 7) 00:07:42.931 36095.212 - 36296.862: 100.0000% ( 6) 00:07:42.931 00:07:42.931 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:42.931 ============================================================================== 00:07:42.931 Range in us Cumulative IO count 00:07:42.931 6452.775 - 6503.188: 0.0112% ( 1) 00:07:42.931 6654.425 - 6704.837: 0.0225% ( 1) 00:07:42.931 6704.837 - 6755.249: 0.0337% ( 1) 00:07:42.931 6755.249 - 6805.662: 0.1349% ( 9) 00:07:42.931 6805.662 - 6856.074: 0.3260% ( 17) 00:07:42.931 6856.074 - 6906.486: 0.7307% ( 36) 00:07:42.931 6906.486 - 6956.898: 1.2365% ( 45) 00:07:42.931 6956.898 - 7007.311: 1.6412% ( 36) 00:07:42.931 7007.311 - 7057.723: 2.1021% ( 41) 00:07:42.931 7057.723 - 7108.135: 2.3156% ( 19) 00:07:42.931 7108.135 - 7158.548: 2.5180% ( 18) 00:07:42.931 7158.548 - 7208.960: 2.6754% ( 14) 00:07:42.931 7208.960 - 7259.372: 2.7428% ( 6) 00:07:42.931 7259.372 - 7309.785: 2.7653% ( 2) 00:07:42.931 7309.785 - 7360.197: 2.7878% ( 2) 00:07:42.931 7360.197 - 7410.609: 2.8103% ( 2) 00:07:42.931 7410.609 - 7461.022: 2.8327% ( 2) 00:07:42.931 7461.022 - 7511.434: 2.8665% ( 3) 00:07:42.931 7511.434 - 7561.846: 2.8777% ( 1) 00:07:42.931 7713.083 - 7763.495: 2.8889% ( 1) 00:07:42.931 7965.145 - 8015.557: 2.9227% ( 3) 00:07:42.931 8015.557 - 8065.969: 2.9789% ( 5) 00:07:42.931 8065.969 - 8116.382: 3.0238% ( 4) 00:07:42.931 8116.382 - 8166.794: 3.3049% ( 25) 00:07:42.931 8166.794 - 8217.206: 3.4735% ( 15) 00:07:42.931 8217.206 - 8267.618: 3.5072% ( 3) 00:07:42.931 8267.618 - 8318.031: 3.5409% ( 3) 00:07:42.931 8318.031 - 8368.443: 3.5634% ( 2) 00:07:42.931 8368.443 - 8418.855: 3.5971% ( 3) 00:07:42.931 9779.988 - 9830.400: 3.6196% ( 2) 00:07:42.931 9981.637 - 10032.049: 3.6308% ( 1) 00:07:42.931 10032.049 - 10082.462: 3.6646% ( 3) 00:07:42.931 10082.462 - 10132.874: 3.7320% ( 6) 00:07:42.931 10132.874 - 10183.286: 3.7657% ( 3) 00:07:42.931 10183.286 - 10233.698: 3.7995% ( 3) 00:07:42.931 10233.698 - 10284.111: 3.8669% ( 6) 00:07:42.931 10284.111 - 10334.523: 4.0468% ( 16) 00:07:42.931 10334.523 - 10384.935: 4.1479% ( 9) 00:07:42.931 10384.935 - 10435.348: 4.2379% ( 8) 00:07:42.931 10435.348 - 10485.760: 4.3278% ( 8) 00:07:42.931 10485.760 - 10536.172: 4.4852% ( 14) 00:07:42.931 10536.172 - 10586.585: 4.6875% ( 18) 00:07:42.931 10586.585 - 10636.997: 5.1596% ( 42) 00:07:42.931 10636.997 - 10687.409: 5.6317% ( 42) 00:07:42.931 10687.409 - 10737.822: 6.0027% ( 33) 00:07:42.931 10737.822 - 10788.234: 6.4074% ( 36) 00:07:42.931 10788.234 - 10838.646: 6.8795% ( 42) 00:07:42.931 10838.646 - 10889.058: 7.1830% ( 27) 00:07:42.931 10889.058 - 10939.471: 7.4303% ( 22) 00:07:42.931 10939.471 - 10989.883: 7.7226% ( 26) 00:07:42.931 10989.883 - 11040.295: 8.1722% ( 40) 00:07:42.931 11040.295 - 11090.708: 8.4532% ( 25) 00:07:42.931 11090.708 - 11141.120: 8.6556% ( 18) 00:07:42.931 11141.120 - 11191.532: 8.8804% ( 20) 00:07:42.931 11191.532 - 11241.945: 9.1052% ( 20) 00:07:42.931 11241.945 - 11292.357: 9.3188% ( 19) 00:07:42.931 11292.357 - 11342.769: 9.5661% ( 22) 00:07:42.931 11342.769 - 11393.182: 9.8134% ( 22) 00:07:42.931 11393.182 - 11443.594: 10.2068% ( 35) 00:07:42.931 11443.594 - 11494.006: 10.5890% ( 34) 00:07:42.931 11494.006 - 
11544.418: 11.0049% ( 37) 00:07:42.931 11544.418 - 11594.831: 11.2972% ( 26) 00:07:42.931 11594.831 - 11645.243: 11.6344% ( 30) 00:07:42.931 11645.243 - 11695.655: 12.1178% ( 43) 00:07:42.931 11695.655 - 11746.068: 12.7473% ( 56) 00:07:42.931 11746.068 - 11796.480: 13.4780% ( 65) 00:07:42.931 11796.480 - 11846.892: 13.9613% ( 43) 00:07:42.931 11846.892 - 11897.305: 14.4559% ( 44) 00:07:42.931 11897.305 - 11947.717: 15.1754% ( 64) 00:07:42.932 11947.717 - 11998.129: 16.0409% ( 77) 00:07:42.932 11998.129 - 12048.542: 16.8615% ( 73) 00:07:42.932 12048.542 - 12098.954: 17.7046% ( 75) 00:07:42.932 12098.954 - 12149.366: 18.3790% ( 60) 00:07:42.932 12149.366 - 12199.778: 19.1996% ( 73) 00:07:42.932 12199.778 - 12250.191: 20.2788% ( 96) 00:07:42.932 12250.191 - 12300.603: 21.2230% ( 84) 00:07:42.932 12300.603 - 12351.015: 21.9312% ( 63) 00:07:42.932 12351.015 - 12401.428: 22.5944% ( 59) 00:07:42.932 12401.428 - 12451.840: 23.4263% ( 74) 00:07:42.932 12451.840 - 12502.252: 24.0782% ( 58) 00:07:42.932 12502.252 - 12552.665: 25.0225% ( 84) 00:07:42.932 12552.665 - 12603.077: 25.7531% ( 65) 00:07:42.932 12603.077 - 12653.489: 26.3939% ( 57) 00:07:42.932 12653.489 - 12703.902: 26.9897% ( 53) 00:07:42.932 12703.902 - 12754.314: 27.5629% ( 51) 00:07:42.932 12754.314 - 12804.726: 28.3948% ( 74) 00:07:42.932 12804.726 - 12855.138: 29.0805% ( 61) 00:07:42.932 12855.138 - 12905.551: 30.0135% ( 83) 00:07:42.932 12905.551 - 13006.375: 31.4861% ( 131) 00:07:42.932 13006.375 - 13107.200: 33.0598% ( 140) 00:07:42.932 13107.200 - 13208.025: 34.7572% ( 151) 00:07:42.932 13208.025 - 13308.849: 36.4096% ( 147) 00:07:42.932 13308.849 - 13409.674: 37.9047% ( 133) 00:07:42.932 13409.674 - 13510.498: 39.6920% ( 159) 00:07:42.932 13510.498 - 13611.323: 41.4119% ( 153) 00:07:42.932 13611.323 - 13712.148: 43.0418% ( 145) 00:07:42.932 13712.148 - 13812.972: 44.5369% ( 133) 00:07:42.932 13812.972 - 13913.797: 46.6165% ( 185) 00:07:42.932 13913.797 - 14014.622: 48.7298% ( 188) 00:07:42.932 14014.622 - 14115.446: 50.4721% ( 155) 00:07:42.932 14115.446 - 14216.271: 52.3044% ( 163) 00:07:42.932 14216.271 - 14317.095: 54.0130% ( 152) 00:07:42.932 14317.095 - 14417.920: 55.6767% ( 148) 00:07:42.932 14417.920 - 14518.745: 57.3741% ( 151) 00:07:42.932 14518.745 - 14619.569: 58.8017% ( 127) 00:07:42.932 14619.569 - 14720.394: 60.3867% ( 141) 00:07:42.932 14720.394 - 14821.218: 61.7244% ( 119) 00:07:42.932 14821.218 - 14922.043: 63.1407% ( 126) 00:07:42.932 14922.043 - 15022.868: 64.5796% ( 128) 00:07:42.932 15022.868 - 15123.692: 65.7149% ( 101) 00:07:42.932 15123.692 - 15224.517: 66.5130% ( 71) 00:07:42.932 15224.517 - 15325.342: 67.3449% ( 74) 00:07:42.932 15325.342 - 15426.166: 68.2104% ( 77) 00:07:42.932 15426.166 - 15526.991: 68.9523% ( 66) 00:07:42.932 15526.991 - 15627.815: 69.7617% ( 72) 00:07:42.932 15627.815 - 15728.640: 70.6160% ( 76) 00:07:42.932 15728.640 - 15829.465: 71.5153% ( 80) 00:07:42.932 15829.465 - 15930.289: 72.4708% ( 85) 00:07:42.932 15930.289 - 16031.114: 73.7298% ( 112) 00:07:42.932 16031.114 - 16131.938: 74.9213% ( 106) 00:07:42.932 16131.938 - 16232.763: 76.3377% ( 126) 00:07:42.932 16232.763 - 16333.588: 77.8552% ( 135) 00:07:42.932 16333.588 - 16434.412: 79.3053% ( 129) 00:07:42.932 16434.412 - 16535.237: 80.8116% ( 134) 00:07:42.932 16535.237 - 16636.062: 82.1380% ( 118) 00:07:42.932 16636.062 - 16736.886: 83.3521% ( 108) 00:07:42.932 16736.886 - 16837.711: 84.4087% ( 94) 00:07:42.932 16837.711 - 16938.535: 85.2068% ( 71) 00:07:42.932 16938.535 - 17039.360: 86.0724% ( 77) 00:07:42.932 17039.360 - 
17140.185: 86.9042% ( 74) 00:07:42.932 17140.185 - 17241.009: 87.6124% ( 63) 00:07:42.932 17241.009 - 17341.834: 88.3206% ( 63) 00:07:42.932 17341.834 - 17442.658: 88.8489% ( 47) 00:07:42.932 17442.658 - 17543.483: 89.3997% ( 49) 00:07:42.932 17543.483 - 17644.308: 89.9505% ( 49) 00:07:42.932 17644.308 - 17745.132: 90.4451% ( 44) 00:07:42.932 17745.132 - 17845.957: 91.0184% ( 51) 00:07:42.932 17845.957 - 17946.782: 91.6030% ( 52) 00:07:42.932 17946.782 - 18047.606: 92.1538% ( 49) 00:07:42.932 18047.606 - 18148.431: 92.6034% ( 40) 00:07:42.932 18148.431 - 18249.255: 92.9631% ( 32) 00:07:42.932 18249.255 - 18350.080: 93.3678% ( 36) 00:07:42.932 18350.080 - 18450.905: 93.5589% ( 17) 00:07:42.932 18450.905 - 18551.729: 93.7612% ( 18) 00:07:42.932 18551.729 - 18652.554: 93.9861% ( 20) 00:07:42.932 18652.554 - 18753.378: 94.1996% ( 19) 00:07:42.932 18753.378 - 18854.203: 94.4582% ( 23) 00:07:42.932 18854.203 - 18955.028: 94.6830% ( 20) 00:07:42.932 18955.028 - 19055.852: 94.9078% ( 20) 00:07:42.932 19055.852 - 19156.677: 95.1551% ( 22) 00:07:42.932 19156.677 - 19257.502: 95.4699% ( 28) 00:07:42.932 19257.502 - 19358.326: 95.7284% ( 23) 00:07:42.932 19358.326 - 19459.151: 96.0094% ( 25) 00:07:42.932 19459.151 - 19559.975: 96.3354% ( 29) 00:07:42.932 19559.975 - 19660.800: 96.7176% ( 34) 00:07:42.932 19660.800 - 19761.625: 97.0661% ( 31) 00:07:42.932 19761.625 - 19862.449: 97.3808% ( 28) 00:07:42.932 19862.449 - 19963.274: 97.6506% ( 24) 00:07:42.932 19963.274 - 20064.098: 97.8192% ( 15) 00:07:42.932 20064.098 - 20164.923: 97.9766% ( 14) 00:07:42.932 20164.923 - 20265.748: 98.1003% ( 11) 00:07:42.932 20265.748 - 20366.572: 98.2239% ( 11) 00:07:42.932 20366.572 - 20467.397: 98.3026% ( 7) 00:07:42.932 20467.397 - 20568.222: 98.3925% ( 8) 00:07:42.932 20568.222 - 20669.046: 98.4825% ( 8) 00:07:42.932 20669.046 - 20769.871: 98.5387% ( 5) 00:07:42.932 20769.871 - 20870.695: 98.5612% ( 2) 00:07:42.932 25508.628 - 25609.452: 98.5836% ( 2) 00:07:42.932 25609.452 - 25710.277: 98.6174% ( 3) 00:07:42.932 25710.277 - 25811.102: 98.6398% ( 2) 00:07:42.932 25811.102 - 26012.751: 98.7073% ( 6) 00:07:42.932 26012.751 - 26214.400: 98.7747% ( 6) 00:07:42.932 26214.400 - 26416.049: 98.8309% ( 5) 00:07:42.932 26416.049 - 26617.698: 98.9096% ( 7) 00:07:42.932 26617.698 - 26819.348: 98.9771% ( 6) 00:07:42.932 26819.348 - 27020.997: 99.0558% ( 7) 00:07:42.932 27020.997 - 27222.646: 99.1232% ( 6) 00:07:42.932 27222.646 - 27424.295: 99.1906% ( 6) 00:07:42.932 27424.295 - 27625.945: 99.2581% ( 6) 00:07:42.932 27625.945 - 27827.594: 99.2806% ( 2) 00:07:42.932 33473.772 - 33675.422: 99.3031% ( 2) 00:07:42.932 33675.422 - 33877.071: 99.3817% ( 7) 00:07:42.932 33877.071 - 34078.720: 99.4379% ( 5) 00:07:42.932 34078.720 - 34280.369: 99.5054% ( 6) 00:07:42.932 34280.369 - 34482.018: 99.5728% ( 6) 00:07:42.932 34482.018 - 34683.668: 99.6403% ( 6) 00:07:42.932 34683.668 - 34885.317: 99.7077% ( 6) 00:07:42.932 34885.317 - 35086.966: 99.7864% ( 7) 00:07:42.932 35086.966 - 35288.615: 99.8426% ( 5) 00:07:42.932 35288.615 - 35490.265: 99.9213% ( 7) 00:07:42.932 35490.265 - 35691.914: 99.9888% ( 6) 00:07:42.932 35691.914 - 35893.563: 100.0000% ( 1) 00:07:42.932 00:07:42.932 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:42.932 ============================================================================== 00:07:42.932 Range in us Cumulative IO count 00:07:42.932 6553.600 - 6604.012: 0.0112% ( 1) 00:07:42.932 6654.425 - 6704.837: 0.0225% ( 1) 00:07:42.932 6704.837 - 6755.249: 0.0674% ( 4) 00:07:42.932 6755.249 - 
6805.662: 0.1461% ( 7) 00:07:42.932 6805.662 - 6856.074: 0.4496% ( 27) 00:07:42.932 6856.074 - 6906.486: 0.6857% ( 21) 00:07:42.932 6906.486 - 6956.898: 1.0117% ( 29) 00:07:42.932 6956.898 - 7007.311: 1.3602% ( 31) 00:07:42.932 7007.311 - 7057.723: 2.0009% ( 57) 00:07:42.932 7057.723 - 7108.135: 2.2932% ( 26) 00:07:42.932 7108.135 - 7158.548: 2.4955% ( 18) 00:07:42.932 7158.548 - 7208.960: 2.6641% ( 15) 00:07:42.932 7208.960 - 7259.372: 2.7540% ( 8) 00:07:42.932 7259.372 - 7309.785: 2.8215% ( 6) 00:07:42.932 7309.785 - 7360.197: 2.8665% ( 4) 00:07:42.932 7360.197 - 7410.609: 2.8777% ( 1) 00:07:42.932 7914.732 - 7965.145: 2.9114% ( 3) 00:07:42.932 7965.145 - 8015.557: 2.9789% ( 6) 00:07:42.932 8015.557 - 8065.969: 3.0351% ( 5) 00:07:42.932 8065.969 - 8116.382: 3.1250% ( 8) 00:07:42.932 8116.382 - 8166.794: 3.4173% ( 26) 00:07:42.932 8166.794 - 8217.206: 3.4510% ( 3) 00:07:42.932 8217.206 - 8267.618: 3.4847% ( 3) 00:07:42.932 8267.618 - 8318.031: 3.5184% ( 3) 00:07:42.932 8318.031 - 8368.443: 3.5634% ( 4) 00:07:42.932 8368.443 - 8418.855: 3.5971% ( 3) 00:07:42.932 10082.462 - 10132.874: 3.6196% ( 2) 00:07:42.932 10132.874 - 10183.286: 3.6421% ( 2) 00:07:42.932 10183.286 - 10233.698: 3.7433% ( 9) 00:07:42.932 10233.698 - 10284.111: 3.8444% ( 9) 00:07:42.932 10284.111 - 10334.523: 3.9344% ( 8) 00:07:42.932 10334.523 - 10384.935: 4.0580% ( 11) 00:07:42.932 10384.935 - 10435.348: 4.2603% ( 18) 00:07:42.932 10435.348 - 10485.760: 4.4739% ( 19) 00:07:42.932 10485.760 - 10536.172: 4.6763% ( 18) 00:07:42.932 10536.172 - 10586.585: 4.8336% ( 14) 00:07:42.932 10586.585 - 10636.997: 5.1933% ( 32) 00:07:42.932 10636.997 - 10687.409: 5.5306% ( 30) 00:07:42.932 10687.409 - 10737.822: 5.6992% ( 15) 00:07:42.932 10737.822 - 10788.234: 5.8341% ( 12) 00:07:42.932 10788.234 - 10838.646: 5.9353% ( 9) 00:07:42.932 10838.646 - 10889.058: 6.0589% ( 11) 00:07:42.932 10889.058 - 10939.471: 6.2050% ( 13) 00:07:42.932 10939.471 - 10989.883: 6.4074% ( 18) 00:07:42.932 10989.883 - 11040.295: 6.6547% ( 22) 00:07:42.932 11040.295 - 11090.708: 6.9132% ( 23) 00:07:42.932 11090.708 - 11141.120: 7.1942% ( 25) 00:07:42.932 11141.120 - 11191.532: 7.5315% ( 30) 00:07:42.932 11191.532 - 11241.945: 7.9362% ( 36) 00:07:42.932 11241.945 - 11292.357: 8.1272% ( 17) 00:07:42.932 11292.357 - 11342.769: 8.3183% ( 17) 00:07:42.932 11342.769 - 11393.182: 8.5319% ( 19) 00:07:42.932 11393.182 - 11443.594: 8.8354% ( 27) 00:07:42.932 11443.594 - 11494.006: 9.2851% ( 40) 00:07:42.932 11494.006 - 11544.418: 9.6897% ( 36) 00:07:42.933 11544.418 - 11594.831: 10.2068% ( 46) 00:07:42.933 11594.831 - 11645.243: 10.5890% ( 34) 00:07:42.933 11645.243 - 11695.655: 11.1174% ( 47) 00:07:42.933 11695.655 - 11746.068: 11.5558% ( 39) 00:07:42.933 11746.068 - 11796.480: 12.0953% ( 48) 00:07:42.933 11796.480 - 11846.892: 12.6574% ( 50) 00:07:42.933 11846.892 - 11897.305: 13.2531% ( 53) 00:07:42.933 11897.305 - 11947.717: 14.0625% ( 72) 00:07:42.933 11947.717 - 11998.129: 14.7482% ( 61) 00:07:42.933 11998.129 - 12048.542: 15.3327% ( 52) 00:07:42.933 12048.542 - 12098.954: 16.1308% ( 71) 00:07:42.933 12098.954 - 12149.366: 16.7379% ( 54) 00:07:42.933 12149.366 - 12199.778: 17.4460% ( 63) 00:07:42.933 12199.778 - 12250.191: 18.4465% ( 89) 00:07:42.933 12250.191 - 12300.603: 19.3907% ( 84) 00:07:42.933 12300.603 - 12351.015: 20.4924% ( 98) 00:07:42.933 12351.015 - 12401.428: 21.6614% ( 104) 00:07:42.933 12401.428 - 12451.840: 22.8754% ( 108) 00:07:42.933 12451.840 - 12502.252: 23.9771% ( 98) 00:07:42.933 12502.252 - 12552.665: 24.9775% ( 89) 00:07:42.933 12552.665 
- 12603.077: 26.1578% ( 105) 00:07:42.933 12603.077 - 12653.489: 26.9559% ( 71) 00:07:42.933 12653.489 - 12703.902: 27.8103% ( 76) 00:07:42.933 12703.902 - 12754.314: 28.7208% ( 81) 00:07:42.933 12754.314 - 12804.726: 29.7999% ( 96) 00:07:42.933 12804.726 - 12855.138: 30.9690% ( 104) 00:07:42.933 12855.138 - 12905.551: 31.7783% ( 72) 00:07:42.933 12905.551 - 13006.375: 33.7455% ( 175) 00:07:42.933 13006.375 - 13107.200: 35.4541% ( 152) 00:07:42.933 13107.200 - 13208.025: 37.0279% ( 140) 00:07:42.933 13208.025 - 13308.849: 38.5117% ( 132) 00:07:42.933 13308.849 - 13409.674: 40.2765% ( 157) 00:07:42.933 13409.674 - 13510.498: 41.8278% ( 138) 00:07:42.933 13510.498 - 13611.323: 43.3566% ( 136) 00:07:42.933 13611.323 - 13712.148: 44.8516% ( 133) 00:07:42.933 13712.148 - 13812.972: 46.2455% ( 124) 00:07:42.933 13812.972 - 13913.797: 47.7630% ( 135) 00:07:42.933 13913.797 - 14014.622: 49.1457% ( 123) 00:07:42.933 14014.622 - 14115.446: 50.4609% ( 117) 00:07:42.933 14115.446 - 14216.271: 51.8548% ( 124) 00:07:42.933 14216.271 - 14317.095: 53.2037% ( 120) 00:07:42.933 14317.095 - 14417.920: 54.1367% ( 83) 00:07:42.933 14417.920 - 14518.745: 55.2720% ( 101) 00:07:42.933 14518.745 - 14619.569: 56.5647% ( 115) 00:07:42.933 14619.569 - 14720.394: 57.9586% ( 124) 00:07:42.933 14720.394 - 14821.218: 59.3638% ( 125) 00:07:42.933 14821.218 - 14922.043: 60.6003% ( 110) 00:07:42.933 14922.043 - 15022.868: 62.1178% ( 135) 00:07:42.933 15022.868 - 15123.692: 63.6016% ( 132) 00:07:42.933 15123.692 - 15224.517: 65.1641% ( 139) 00:07:42.933 15224.517 - 15325.342: 66.4006% ( 110) 00:07:42.933 15325.342 - 15426.166: 67.6484% ( 111) 00:07:42.933 15426.166 - 15526.991: 69.0423% ( 124) 00:07:42.933 15526.991 - 15627.815: 70.3687% ( 118) 00:07:42.933 15627.815 - 15728.640: 71.4703% ( 98) 00:07:42.933 15728.640 - 15829.465: 72.5495% ( 96) 00:07:42.933 15829.465 - 15930.289: 73.7073% ( 103) 00:07:42.933 15930.289 - 16031.114: 74.7302% ( 91) 00:07:42.933 16031.114 - 16131.938: 75.9667% ( 110) 00:07:42.933 16131.938 - 16232.763: 77.3494% ( 123) 00:07:42.933 16232.763 - 16333.588: 78.7882% ( 128) 00:07:42.933 16333.588 - 16434.412: 80.5306% ( 155) 00:07:42.933 16434.412 - 16535.237: 81.9020% ( 122) 00:07:42.933 16535.237 - 16636.062: 83.1272% ( 109) 00:07:42.933 16636.062 - 16736.886: 84.4087% ( 114) 00:07:42.933 16736.886 - 16837.711: 85.4429% ( 92) 00:07:42.933 16837.711 - 16938.535: 86.4096% ( 86) 00:07:42.933 16938.535 - 17039.360: 87.3089% ( 80) 00:07:42.933 17039.360 - 17140.185: 88.0283% ( 64) 00:07:42.933 17140.185 - 17241.009: 88.5229% ( 44) 00:07:42.933 17241.009 - 17341.834: 88.9276% ( 36) 00:07:42.933 17341.834 - 17442.658: 89.2424% ( 28) 00:07:42.933 17442.658 - 17543.483: 89.6133% ( 33) 00:07:42.933 17543.483 - 17644.308: 90.0292% ( 37) 00:07:42.933 17644.308 - 17745.132: 90.3103% ( 25) 00:07:42.933 17745.132 - 17845.957: 90.5126% ( 18) 00:07:42.933 17845.957 - 17946.782: 90.7936% ( 25) 00:07:42.933 17946.782 - 18047.606: 91.1533% ( 32) 00:07:42.933 18047.606 - 18148.431: 91.4906% ( 30) 00:07:42.933 18148.431 - 18249.255: 91.9177% ( 38) 00:07:42.933 18249.255 - 18350.080: 92.3449% ( 38) 00:07:42.933 18350.080 - 18450.905: 92.8957% ( 49) 00:07:42.933 18450.905 - 18551.729: 93.3790% ( 43) 00:07:42.933 18551.729 - 18652.554: 93.7612% ( 34) 00:07:42.933 18652.554 - 18753.378: 94.1996% ( 39) 00:07:42.933 18753.378 - 18854.203: 94.6043% ( 36) 00:07:42.933 18854.203 - 18955.028: 94.9528% ( 31) 00:07:42.933 18955.028 - 19055.852: 95.1776% ( 20) 00:07:42.933 19055.852 - 19156.677: 95.5148% ( 30) 00:07:42.933 19156.677 
- 19257.502: 95.9757% ( 41) 00:07:42.933 19257.502 - 19358.326: 96.2792% ( 27) 00:07:42.933 19358.326 - 19459.151: 96.6277% ( 31) 00:07:42.933 19459.151 - 19559.975: 96.9312% ( 27) 00:07:42.933 19559.975 - 19660.800: 97.2347% ( 27) 00:07:42.933 19660.800 - 19761.625: 97.5495% ( 28) 00:07:42.933 19761.625 - 19862.449: 97.7743% ( 20) 00:07:42.933 19862.449 - 19963.274: 97.9766% ( 18) 00:07:42.933 19963.274 - 20064.098: 98.1452% ( 15) 00:07:42.933 20064.098 - 20164.923: 98.2689% ( 11) 00:07:42.933 20164.923 - 20265.748: 98.3476% ( 7) 00:07:42.933 20265.748 - 20366.572: 98.4375% ( 8) 00:07:42.933 20366.572 - 20467.397: 98.5274% ( 8) 00:07:42.933 20467.397 - 20568.222: 98.5612% ( 3) 00:07:42.933 24298.732 - 24399.557: 98.6061% ( 4) 00:07:42.933 24399.557 - 24500.382: 98.6398% ( 3) 00:07:42.933 24500.382 - 24601.206: 98.6736% ( 3) 00:07:42.933 24601.206 - 24702.031: 98.7073% ( 3) 00:07:42.933 24702.031 - 24802.855: 98.7410% ( 3) 00:07:42.933 24802.855 - 24903.680: 98.7747% ( 3) 00:07:42.933 24903.680 - 25004.505: 98.7972% ( 2) 00:07:42.933 25004.505 - 25105.329: 98.8309% ( 3) 00:07:42.933 25105.329 - 25206.154: 98.8534% ( 2) 00:07:42.933 25206.154 - 25306.978: 98.8871% ( 3) 00:07:42.933 25306.978 - 25407.803: 98.9209% ( 3) 00:07:42.933 25407.803 - 25508.628: 98.9546% ( 3) 00:07:42.933 25508.628 - 25609.452: 98.9996% ( 4) 00:07:42.933 25609.452 - 25710.277: 99.0333% ( 3) 00:07:42.933 25710.277 - 25811.102: 99.0670% ( 3) 00:07:42.933 25811.102 - 26012.751: 99.1457% ( 7) 00:07:42.933 26012.751 - 26214.400: 99.2131% ( 6) 00:07:42.933 26214.400 - 26416.049: 99.2806% ( 6) 00:07:42.933 32263.877 - 32465.526: 99.3368% ( 5) 00:07:42.933 32465.526 - 32667.175: 99.4042% ( 6) 00:07:42.933 32667.175 - 32868.825: 99.4717% ( 6) 00:07:42.933 32868.825 - 33070.474: 99.5391% ( 6) 00:07:42.933 33070.474 - 33272.123: 99.6178% ( 7) 00:07:42.933 33272.123 - 33473.772: 99.6853% ( 6) 00:07:42.933 33473.772 - 33675.422: 99.7527% ( 6) 00:07:42.933 33675.422 - 33877.071: 99.8314% ( 7) 00:07:42.933 33877.071 - 34078.720: 99.8988% ( 6) 00:07:42.933 34078.720 - 34280.369: 99.9775% ( 7) 00:07:42.933 34280.369 - 34482.018: 100.0000% ( 2) 00:07:42.933 00:07:42.933 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:42.933 ============================================================================== 00:07:42.933 Range in us Cumulative IO count 00:07:42.933 6553.600 - 6604.012: 0.0112% ( 1) 00:07:42.933 6654.425 - 6704.837: 0.0225% ( 1) 00:07:42.933 6704.837 - 6755.249: 0.0337% ( 1) 00:07:42.933 6755.249 - 6805.662: 0.1012% ( 6) 00:07:42.933 6805.662 - 6856.074: 0.2023% ( 9) 00:07:42.933 6856.074 - 6906.486: 0.3372% ( 12) 00:07:42.933 6906.486 - 6956.898: 0.9105% ( 51) 00:07:42.933 6956.898 - 7007.311: 1.1466% ( 21) 00:07:42.933 7007.311 - 7057.723: 1.3826% ( 21) 00:07:42.933 7057.723 - 7108.135: 1.6637% ( 25) 00:07:42.933 7108.135 - 7158.548: 2.2257% ( 50) 00:07:42.933 7158.548 - 7208.960: 2.4955% ( 24) 00:07:42.933 7208.960 - 7259.372: 2.6978% ( 18) 00:07:42.933 7259.372 - 7309.785: 2.7765% ( 7) 00:07:42.933 7309.785 - 7360.197: 2.8327% ( 5) 00:07:42.933 7360.197 - 7410.609: 2.8665% ( 3) 00:07:42.933 7410.609 - 7461.022: 2.8777% ( 1) 00:07:42.933 7965.145 - 8015.557: 2.8889% ( 1) 00:07:42.933 8116.382 - 8166.794: 2.9227% ( 3) 00:07:42.933 8166.794 - 8217.206: 2.9789% ( 5) 00:07:42.933 8217.206 - 8267.618: 3.0351% ( 5) 00:07:42.933 8267.618 - 8318.031: 3.3161% ( 25) 00:07:42.933 8318.031 - 8368.443: 3.4285% ( 10) 00:07:42.933 8368.443 - 8418.855: 3.4622% ( 3) 00:07:42.933 8418.855 - 8469.268: 3.5072% ( 4) 
00:07:42.933 8469.268 - 8519.680: 3.5409% ( 3) 00:07:42.933 8519.680 - 8570.092: 3.5746% ( 3) 00:07:42.934 8570.092 - 8620.505: 3.5971% ( 2) 00:07:42.934 9124.628 - 9175.040: 3.6084% ( 1) 00:07:42.934 9175.040 - 9225.452: 3.6196% ( 1) 00:07:42.934 9225.452 - 9275.865: 3.6533% ( 3) 00:07:42.934 9275.865 - 9326.277: 3.7657% ( 10) 00:07:42.934 9326.277 - 9376.689: 3.8107% ( 4) 00:07:42.934 9376.689 - 9427.102: 3.8894% ( 7) 00:07:42.934 9427.102 - 9477.514: 3.9344% ( 4) 00:07:42.934 9477.514 - 9527.926: 3.9681% ( 3) 00:07:42.934 9527.926 - 9578.338: 3.9906% ( 2) 00:07:42.934 9578.338 - 9628.751: 4.0130% ( 2) 00:07:42.934 9628.751 - 9679.163: 4.0355% ( 2) 00:07:42.934 9679.163 - 9729.575: 4.0580% ( 2) 00:07:42.934 9729.575 - 9779.988: 4.0917% ( 3) 00:07:42.934 9779.988 - 9830.400: 4.1142% ( 2) 00:07:42.934 9830.400 - 9880.812: 4.1367% ( 2) 00:07:42.934 9880.812 - 9931.225: 4.1592% ( 2) 00:07:42.934 9931.225 - 9981.637: 4.1929% ( 3) 00:07:42.934 9981.637 - 10032.049: 4.2154% ( 2) 00:07:42.934 10032.049 - 10082.462: 4.2379% ( 2) 00:07:42.934 10082.462 - 10132.874: 4.3053% ( 6) 00:07:42.934 10132.874 - 10183.286: 4.4177% ( 10) 00:07:42.934 10183.286 - 10233.698: 4.5414% ( 11) 00:07:42.934 10233.698 - 10284.111: 4.6875% ( 13) 00:07:42.934 10284.111 - 10334.523: 4.8786% ( 17) 00:07:42.934 10334.523 - 10384.935: 5.0247% ( 13) 00:07:42.934 10384.935 - 10435.348: 5.1596% ( 12) 00:07:42.934 10435.348 - 10485.760: 5.2833% ( 11) 00:07:42.934 10485.760 - 10536.172: 5.3957% ( 10) 00:07:42.934 10536.172 - 10586.585: 5.5418% ( 13) 00:07:42.934 10586.585 - 10636.997: 5.6992% ( 14) 00:07:42.934 10636.997 - 10687.409: 5.8116% ( 10) 00:07:42.934 10687.409 - 10737.822: 5.8903% ( 7) 00:07:42.934 10737.822 - 10788.234: 5.9690% ( 7) 00:07:42.934 10788.234 - 10838.646: 6.0252% ( 5) 00:07:42.934 10838.646 - 10889.058: 6.1713% ( 13) 00:07:42.934 10889.058 - 10939.471: 6.3287% ( 14) 00:07:42.934 10939.471 - 10989.883: 6.5647% ( 21) 00:07:42.934 10989.883 - 11040.295: 6.7558% ( 17) 00:07:42.934 11040.295 - 11090.708: 6.9919% ( 21) 00:07:42.934 11090.708 - 11141.120: 7.2504% ( 23) 00:07:42.934 11141.120 - 11191.532: 7.4415% ( 17) 00:07:42.934 11191.532 - 11241.945: 7.8687% ( 38) 00:07:42.934 11241.945 - 11292.357: 8.1385% ( 24) 00:07:42.934 11292.357 - 11342.769: 8.5094% ( 33) 00:07:42.934 11342.769 - 11393.182: 8.9254% ( 37) 00:07:42.934 11393.182 - 11443.594: 9.4537% ( 47) 00:07:42.934 11443.594 - 11494.006: 9.8359% ( 34) 00:07:42.934 11494.006 - 11544.418: 10.2968% ( 41) 00:07:42.934 11544.418 - 11594.831: 10.5890% ( 26) 00:07:42.934 11594.831 - 11645.243: 10.9825% ( 35) 00:07:42.934 11645.243 - 11695.655: 11.3647% ( 34) 00:07:42.934 11695.655 - 11746.068: 11.6569% ( 26) 00:07:42.934 11746.068 - 11796.480: 11.9829% ( 29) 00:07:42.934 11796.480 - 11846.892: 12.2527% ( 24) 00:07:42.934 11846.892 - 11897.305: 12.6124% ( 32) 00:07:42.934 11897.305 - 11947.717: 13.2194% ( 54) 00:07:42.934 11947.717 - 11998.129: 13.6016% ( 34) 00:07:42.934 11998.129 - 12048.542: 13.9501% ( 31) 00:07:42.934 12048.542 - 12098.954: 14.2761% ( 29) 00:07:42.934 12098.954 - 12149.366: 14.7032% ( 38) 00:07:42.934 12149.366 - 12199.778: 15.2765% ( 51) 00:07:42.934 12199.778 - 12250.191: 15.9510% ( 60) 00:07:42.934 12250.191 - 12300.603: 16.7828% ( 74) 00:07:42.934 12300.603 - 12351.015: 18.2666% ( 132) 00:07:42.934 12351.015 - 12401.428: 19.4020% ( 101) 00:07:42.934 12401.428 - 12451.840: 20.3125% ( 81) 00:07:42.934 12451.840 - 12502.252: 21.3692% ( 94) 00:07:42.934 12502.252 - 12552.665: 22.8417% ( 131) 00:07:42.934 12552.665 - 12603.077: 24.1682% ( 
118) 00:07:42.934 12603.077 - 12653.489: 25.4946% ( 118) 00:07:42.934 12653.489 - 12703.902: 26.7536% ( 112) 00:07:42.934 12703.902 - 12754.314: 27.9564% ( 107) 00:07:42.934 12754.314 - 12804.726: 29.1254% ( 104) 00:07:42.934 12804.726 - 12855.138: 30.5755% ( 129) 00:07:42.934 12855.138 - 12905.551: 31.7109% ( 101) 00:07:42.934 12905.551 - 13006.375: 34.2851% ( 229) 00:07:42.934 13006.375 - 13107.200: 36.1623% ( 167) 00:07:42.934 13107.200 - 13208.025: 37.6349% ( 131) 00:07:42.934 13208.025 - 13308.849: 39.5459% ( 170) 00:07:42.934 13308.849 - 13409.674: 41.2433% ( 151) 00:07:42.934 13409.674 - 13510.498: 42.9969% ( 156) 00:07:42.934 13510.498 - 13611.323: 44.3570% ( 121) 00:07:42.934 13611.323 - 13712.148: 45.7059% ( 120) 00:07:42.934 13712.148 - 13812.972: 46.8862% ( 105) 00:07:42.934 13812.972 - 13913.797: 48.1452% ( 112) 00:07:42.934 13913.797 - 14014.622: 49.8876% ( 155) 00:07:42.934 14014.622 - 14115.446: 51.1241% ( 110) 00:07:42.934 14115.446 - 14216.271: 52.2370% ( 99) 00:07:42.934 14216.271 - 14317.095: 53.4060% ( 104) 00:07:42.934 14317.095 - 14417.920: 54.2828% ( 78) 00:07:42.934 14417.920 - 14518.745: 55.3957% ( 99) 00:07:42.934 14518.745 - 14619.569: 56.4411% ( 93) 00:07:42.934 14619.569 - 14720.394: 57.7001% ( 112) 00:07:42.934 14720.394 - 14821.218: 58.9703% ( 113) 00:07:42.934 14821.218 - 14922.043: 60.5103% ( 137) 00:07:42.934 14922.043 - 15022.868: 62.0616% ( 138) 00:07:42.934 15022.868 - 15123.692: 63.2644% ( 107) 00:07:42.934 15123.692 - 15224.517: 64.5796% ( 117) 00:07:42.934 15224.517 - 15325.342: 65.9622% ( 123) 00:07:42.934 15325.342 - 15426.166: 67.4123% ( 129) 00:07:42.934 15426.166 - 15526.991: 68.9186% ( 134) 00:07:42.934 15526.991 - 15627.815: 70.0989% ( 105) 00:07:42.934 15627.815 - 15728.640: 71.3916% ( 115) 00:07:42.934 15728.640 - 15829.465: 72.6506% ( 112) 00:07:42.934 15829.465 - 15930.289: 73.7522% ( 98) 00:07:42.934 15930.289 - 16031.114: 74.9213% ( 104) 00:07:42.934 16031.114 - 16131.938: 76.0117% ( 97) 00:07:42.934 16131.938 - 16232.763: 77.0571% ( 93) 00:07:42.934 16232.763 - 16333.588: 78.2711% ( 108) 00:07:42.934 16333.588 - 16434.412: 79.6201% ( 120) 00:07:42.934 16434.412 - 16535.237: 80.7104% ( 97) 00:07:42.934 16535.237 - 16636.062: 81.8795% ( 104) 00:07:42.934 16636.062 - 16736.886: 83.0935% ( 108) 00:07:42.934 16736.886 - 16837.711: 84.3862% ( 115) 00:07:42.934 16837.711 - 16938.535: 85.4991% ( 99) 00:07:42.934 16938.535 - 17039.360: 86.4658% ( 86) 00:07:42.934 17039.360 - 17140.185: 87.1853% ( 64) 00:07:42.934 17140.185 - 17241.009: 87.8485% ( 59) 00:07:42.934 17241.009 - 17341.834: 88.5004% ( 58) 00:07:42.934 17341.834 - 17442.658: 88.9838% ( 43) 00:07:42.934 17442.658 - 17543.483: 89.5346% ( 49) 00:07:42.934 17543.483 - 17644.308: 90.1416% ( 54) 00:07:42.934 17644.308 - 17745.132: 90.6587% ( 46) 00:07:42.934 17745.132 - 17845.957: 91.2433% ( 52) 00:07:42.934 17845.957 - 17946.782: 91.7266% ( 43) 00:07:42.934 17946.782 - 18047.606: 92.1987% ( 42) 00:07:42.934 18047.606 - 18148.431: 92.6933% ( 44) 00:07:42.934 18148.431 - 18249.255: 93.2217% ( 47) 00:07:42.934 18249.255 - 18350.080: 93.6938% ( 42) 00:07:42.934 18350.080 - 18450.905: 94.0985% ( 36) 00:07:42.934 18450.905 - 18551.729: 94.5256% ( 38) 00:07:42.934 18551.729 - 18652.554: 94.8516% ( 29) 00:07:42.934 18652.554 - 18753.378: 95.1551% ( 27) 00:07:42.934 18753.378 - 18854.203: 95.5036% ( 31) 00:07:42.934 18854.203 - 18955.028: 95.8183% ( 28) 00:07:42.934 18955.028 - 19055.852: 96.0656% ( 22) 00:07:42.934 19055.852 - 19156.677: 96.3467% ( 25) 00:07:42.934 19156.677 - 19257.502: 96.5603% 
( 19) 00:07:42.934 19257.502 - 19358.326: 96.8525% ( 26) 00:07:42.934 19358.326 - 19459.151: 97.0661% ( 19) 00:07:42.934 19459.151 - 19559.975: 97.1785% ( 10) 00:07:42.934 19559.975 - 19660.800: 97.3022% ( 11) 00:07:42.934 19660.800 - 19761.625: 97.5270% ( 20) 00:07:42.934 19761.625 - 19862.449: 97.6844% ( 14) 00:07:42.934 19862.449 - 19963.274: 97.8642% ( 16) 00:07:42.934 19963.274 - 20064.098: 98.0216% ( 14) 00:07:42.934 20064.098 - 20164.923: 98.1790% ( 14) 00:07:42.934 20164.923 - 20265.748: 98.3026% ( 11) 00:07:42.934 20265.748 - 20366.572: 98.3588% ( 5) 00:07:42.934 20366.572 - 20467.397: 98.4150% ( 5) 00:07:42.934 20467.397 - 20568.222: 98.4600% ( 4) 00:07:42.934 20568.222 - 20669.046: 98.5162% ( 5) 00:07:42.934 20669.046 - 20769.871: 98.5612% ( 4) 00:07:42.934 22887.188 - 22988.012: 98.5724% ( 1) 00:07:42.934 22988.012 - 23088.837: 98.6061% ( 3) 00:07:42.934 23088.837 - 23189.662: 98.6398% ( 3) 00:07:42.934 23189.662 - 23290.486: 98.6736% ( 3) 00:07:42.934 23290.486 - 23391.311: 98.7073% ( 3) 00:07:42.934 23391.311 - 23492.135: 98.7410% ( 3) 00:07:42.934 23492.135 - 23592.960: 98.7747% ( 3) 00:07:42.934 23592.960 - 23693.785: 98.8085% ( 3) 00:07:42.934 23693.785 - 23794.609: 98.8422% ( 3) 00:07:42.934 23794.609 - 23895.434: 98.8647% ( 2) 00:07:42.934 23895.434 - 23996.258: 98.8984% ( 3) 00:07:42.934 23996.258 - 24097.083: 98.9321% ( 3) 00:07:42.934 24097.083 - 24197.908: 98.9658% ( 3) 00:07:42.934 24197.908 - 24298.732: 98.9996% ( 3) 00:07:42.934 24298.732 - 24399.557: 99.0333% ( 3) 00:07:42.934 24399.557 - 24500.382: 99.0558% ( 2) 00:07:42.934 24500.382 - 24601.206: 99.1007% ( 4) 00:07:42.934 24601.206 - 24702.031: 99.1344% ( 3) 00:07:42.934 24702.031 - 24802.855: 99.1682% ( 3) 00:07:42.934 24802.855 - 24903.680: 99.2019% ( 3) 00:07:42.934 24903.680 - 25004.505: 99.2469% ( 4) 00:07:42.934 25004.505 - 25105.329: 99.2806% ( 3) 00:07:42.934 30852.332 - 31053.982: 99.3255% ( 4) 00:07:42.934 31053.982 - 31255.631: 99.3930% ( 6) 00:07:42.934 31255.631 - 31457.280: 99.4604% ( 6) 00:07:42.934 31457.280 - 31658.929: 99.5279% ( 6) 00:07:42.934 31658.929 - 31860.578: 99.5841% ( 5) 00:07:42.934 31860.578 - 32062.228: 99.6403% ( 5) 00:07:42.934 32062.228 - 32263.877: 99.6965% ( 5) 00:07:42.935 32263.877 - 32465.526: 99.7639% ( 6) 00:07:42.935 32465.526 - 32667.175: 99.8426% ( 7) 00:07:42.935 32667.175 - 32868.825: 99.9101% ( 6) 00:07:42.935 32868.825 - 33070.474: 99.9888% ( 7) 00:07:42.935 33070.474 - 33272.123: 100.0000% ( 1) 00:07:42.935 00:07:42.935 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:42.935 ============================================================================== 00:07:42.935 Range in us Cumulative IO count 00:07:42.935 6503.188 - 6553.600: 0.0223% ( 2) 00:07:42.935 6704.837 - 6755.249: 0.0335% ( 1) 00:07:42.935 6755.249 - 6805.662: 0.1228% ( 8) 00:07:42.935 6805.662 - 6856.074: 0.2344% ( 10) 00:07:42.935 6856.074 - 6906.486: 0.3683% ( 12) 00:07:42.935 6906.486 - 6956.898: 0.7701% ( 36) 00:07:42.935 6956.898 - 7007.311: 1.2946% ( 47) 00:07:42.935 7007.311 - 7057.723: 1.7188% ( 38) 00:07:42.935 7057.723 - 7108.135: 1.9420% ( 20) 00:07:42.935 7108.135 - 7158.548: 2.0871% ( 13) 00:07:42.935 7158.548 - 7208.960: 2.2433% ( 14) 00:07:42.935 7208.960 - 7259.372: 2.5558% ( 28) 00:07:42.935 7259.372 - 7309.785: 2.6786% ( 11) 00:07:42.935 7309.785 - 7360.197: 2.7232% ( 4) 00:07:42.935 7360.197 - 7410.609: 2.7567% ( 3) 00:07:42.935 7410.609 - 7461.022: 2.7902% ( 3) 00:07:42.935 7461.022 - 7511.434: 2.8348% ( 4) 00:07:42.935 7511.434 - 7561.846: 2.8571% ( 2) 
00:07:42.935 8015.557 - 8065.969: 2.8683% ( 1) 00:07:42.935 8065.969 - 8116.382: 2.8795% ( 1) 00:07:42.935 8267.618 - 8318.031: 2.8906% ( 1) 00:07:42.935 8318.031 - 8368.443: 2.9353% ( 4) 00:07:42.935 8368.443 - 8418.855: 3.0022% ( 6) 00:07:42.935 8418.855 - 8469.268: 3.2478% ( 22) 00:07:42.935 8469.268 - 8519.680: 3.5714% ( 29) 00:07:42.935 8519.680 - 8570.092: 3.7054% ( 12) 00:07:42.935 8570.092 - 8620.505: 3.7835% ( 7) 00:07:42.935 8620.505 - 8670.917: 3.8281% ( 4) 00:07:42.935 8670.917 - 8721.329: 3.8951% ( 6) 00:07:42.935 8721.329 - 8771.742: 3.9397% ( 4) 00:07:42.935 8771.742 - 8822.154: 3.9621% ( 2) 00:07:42.935 8822.154 - 8872.566: 3.9844% ( 2) 00:07:42.935 8872.566 - 8922.978: 4.0067% ( 2) 00:07:42.935 8922.978 - 8973.391: 4.0402% ( 3) 00:07:42.935 8973.391 - 9023.803: 4.0513% ( 1) 00:07:42.935 9023.803 - 9074.215: 4.0625% ( 1) 00:07:42.935 9074.215 - 9124.628: 4.0960% ( 3) 00:07:42.935 9124.628 - 9175.040: 4.1183% ( 2) 00:07:42.935 9175.040 - 9225.452: 4.1406% ( 2) 00:07:42.935 9225.452 - 9275.865: 4.1741% ( 3) 00:07:42.935 9275.865 - 9326.277: 4.1964% ( 2) 00:07:42.935 9326.277 - 9376.689: 4.2188% ( 2) 00:07:42.935 9376.689 - 9427.102: 4.2411% ( 2) 00:07:42.935 9427.102 - 9477.514: 4.2634% ( 2) 00:07:42.935 9477.514 - 9527.926: 4.2857% ( 2) 00:07:42.935 10032.049 - 10082.462: 4.3080% ( 2) 00:07:42.935 10082.462 - 10132.874: 4.3527% ( 4) 00:07:42.935 10132.874 - 10183.286: 4.3973% ( 4) 00:07:42.935 10183.286 - 10233.698: 4.4531% ( 5) 00:07:42.935 10233.698 - 10284.111: 4.4978% ( 4) 00:07:42.935 10284.111 - 10334.523: 4.5424% ( 4) 00:07:42.935 10334.523 - 10384.935: 4.5871% ( 4) 00:07:42.935 10384.935 - 10435.348: 4.6205% ( 3) 00:07:42.935 10435.348 - 10485.760: 4.7210% ( 9) 00:07:42.935 10485.760 - 10536.172: 4.7545% ( 3) 00:07:42.935 10536.172 - 10586.585: 4.7879% ( 3) 00:07:42.935 10586.585 - 10636.997: 4.8549% ( 6) 00:07:42.935 10636.997 - 10687.409: 4.9777% ( 11) 00:07:42.935 10687.409 - 10737.822: 5.1674% ( 17) 00:07:42.935 10737.822 - 10788.234: 5.4576% ( 26) 00:07:42.935 10788.234 - 10838.646: 5.8482% ( 35) 00:07:42.935 10838.646 - 10889.058: 6.0156% ( 15) 00:07:42.935 10889.058 - 10939.471: 6.1830% ( 15) 00:07:42.935 10939.471 - 10989.883: 6.4062% ( 20) 00:07:42.935 10989.883 - 11040.295: 6.7188% ( 28) 00:07:42.935 11040.295 - 11090.708: 6.9420% ( 20) 00:07:42.935 11090.708 - 11141.120: 7.2098% ( 24) 00:07:42.935 11141.120 - 11191.532: 7.6339% ( 38) 00:07:42.935 11191.532 - 11241.945: 8.0246% ( 35) 00:07:42.935 11241.945 - 11292.357: 8.5938% ( 51) 00:07:42.935 11292.357 - 11342.769: 9.0290% ( 39) 00:07:42.935 11342.769 - 11393.182: 9.4308% ( 36) 00:07:42.935 11393.182 - 11443.594: 9.8438% ( 37) 00:07:42.935 11443.594 - 11494.006: 10.1116% ( 24) 00:07:42.935 11494.006 - 11544.418: 10.4911% ( 34) 00:07:42.935 11544.418 - 11594.831: 10.7812% ( 26) 00:07:42.935 11594.831 - 11645.243: 10.9821% ( 18) 00:07:42.935 11645.243 - 11695.655: 11.2054% ( 20) 00:07:42.935 11695.655 - 11746.068: 11.4397% ( 21) 00:07:42.935 11746.068 - 11796.480: 11.7076% ( 24) 00:07:42.935 11796.480 - 11846.892: 12.1094% ( 36) 00:07:42.935 11846.892 - 11897.305: 12.5000% ( 35) 00:07:42.935 11897.305 - 11947.717: 12.9688% ( 42) 00:07:42.935 11947.717 - 11998.129: 13.4933% ( 47) 00:07:42.935 11998.129 - 12048.542: 14.2299% ( 66) 00:07:42.935 12048.542 - 12098.954: 14.6652% ( 39) 00:07:42.935 12098.954 - 12149.366: 15.2121% ( 49) 00:07:42.935 12149.366 - 12199.778: 15.9152% ( 63) 00:07:42.935 12199.778 - 12250.191: 16.8304% ( 82) 00:07:42.935 12250.191 - 12300.603: 17.7455% ( 82) 00:07:42.935 12300.603 - 
12351.015: 18.8058% ( 95) 00:07:42.935 12351.015 - 12401.428: 19.6652% ( 77) 00:07:42.935 12401.428 - 12451.840: 20.6362% ( 87) 00:07:42.935 12451.840 - 12502.252: 21.8862% ( 112) 00:07:42.935 12502.252 - 12552.665: 22.9241% ( 93) 00:07:42.935 12552.665 - 12603.077: 24.0179% ( 98) 00:07:42.935 12603.077 - 12653.489: 24.9777% ( 86) 00:07:42.935 12653.489 - 12703.902: 26.1272% ( 103) 00:07:42.935 12703.902 - 12754.314: 27.2098% ( 97) 00:07:42.935 12754.314 - 12804.726: 28.3371% ( 101) 00:07:42.935 12804.726 - 12855.138: 29.4978% ( 104) 00:07:42.935 12855.138 - 12905.551: 30.4576% ( 86) 00:07:42.935 12905.551 - 13006.375: 32.3996% ( 174) 00:07:42.935 13006.375 - 13107.200: 34.1183% ( 154) 00:07:42.935 13107.200 - 13208.025: 36.1384% ( 181) 00:07:42.935 13208.025 - 13308.849: 38.2254% ( 187) 00:07:42.935 13308.849 - 13409.674: 40.0223% ( 161) 00:07:42.935 13409.674 - 13510.498: 41.8973% ( 168) 00:07:42.935 13510.498 - 13611.323: 44.1295% ( 200) 00:07:42.935 13611.323 - 13712.148: 45.8817% ( 157) 00:07:42.935 13712.148 - 13812.972: 47.9018% ( 181) 00:07:42.935 13812.972 - 13913.797: 49.4531% ( 139) 00:07:42.935 13913.797 - 14014.622: 51.1719% ( 154) 00:07:42.935 14014.622 - 14115.446: 52.7121% ( 138) 00:07:42.935 14115.446 - 14216.271: 54.0067% ( 116) 00:07:42.935 14216.271 - 14317.095: 54.9888% ( 88) 00:07:42.935 14317.095 - 14417.920: 55.8371% ( 76) 00:07:42.935 14417.920 - 14518.745: 56.8527% ( 91) 00:07:42.935 14518.745 - 14619.569: 57.8348% ( 88) 00:07:42.935 14619.569 - 14720.394: 59.0737% ( 111) 00:07:42.935 14720.394 - 14821.218: 60.3348% ( 113) 00:07:42.935 14821.218 - 14922.043: 61.0938% ( 68) 00:07:42.935 14922.043 - 15022.868: 62.0759% ( 88) 00:07:42.935 15022.868 - 15123.692: 63.0580% ( 88) 00:07:42.935 15123.692 - 15224.517: 64.4420% ( 124) 00:07:42.935 15224.517 - 15325.342: 65.8705% ( 128) 00:07:42.935 15325.342 - 15426.166: 67.0536% ( 106) 00:07:42.935 15426.166 - 15526.991: 68.3705% ( 118) 00:07:42.935 15526.991 - 15627.815: 69.9107% ( 138) 00:07:42.935 15627.815 - 15728.640: 71.2946% ( 124) 00:07:42.935 15728.640 - 15829.465: 73.0134% ( 154) 00:07:42.935 15829.465 - 15930.289: 74.5982% ( 142) 00:07:42.935 15930.289 - 16031.114: 75.7924% ( 107) 00:07:42.935 16031.114 - 16131.938: 76.9085% ( 100) 00:07:42.935 16131.938 - 16232.763: 77.8348% ( 83) 00:07:42.935 16232.763 - 16333.588: 78.7165% ( 79) 00:07:42.935 16333.588 - 16434.412: 79.5871% ( 78) 00:07:42.935 16434.412 - 16535.237: 80.6027% ( 91) 00:07:42.935 16535.237 - 16636.062: 81.5737% ( 87) 00:07:42.935 16636.062 - 16736.886: 82.7344% ( 104) 00:07:42.935 16736.886 - 16837.711: 84.2634% ( 137) 00:07:42.935 16837.711 - 16938.535: 85.1786% ( 82) 00:07:42.935 16938.535 - 17039.360: 86.1607% ( 88) 00:07:42.935 17039.360 - 17140.185: 87.0982% ( 84) 00:07:42.935 17140.185 - 17241.009: 87.8348% ( 66) 00:07:42.935 17241.009 - 17341.834: 88.6496% ( 73) 00:07:42.935 17341.834 - 17442.658: 89.5201% ( 78) 00:07:42.935 17442.658 - 17543.483: 90.4799% ( 86) 00:07:42.935 17543.483 - 17644.308: 91.1830% ( 63) 00:07:42.935 17644.308 - 17745.132: 91.8415% ( 59) 00:07:42.935 17745.132 - 17845.957: 92.4219% ( 52) 00:07:42.935 17845.957 - 17946.782: 92.9799% ( 50) 00:07:42.935 17946.782 - 18047.606: 93.6049% ( 56) 00:07:42.935 18047.606 - 18148.431: 94.1964% ( 53) 00:07:42.935 18148.431 - 18249.255: 94.7321% ( 48) 00:07:42.935 18249.255 - 18350.080: 95.1562% ( 38) 00:07:42.935 18350.080 - 18450.905: 95.4464% ( 26) 00:07:42.935 18450.905 - 18551.729: 95.6473% ( 18) 00:07:42.935 18551.729 - 18652.554: 95.8482% ( 18) 00:07:42.935 18652.554 - 
18753.378: 96.0379% ( 17)
00:07:42.935 18753.378 - 18854.203: 96.2054% ( 15)
00:07:42.935 18854.203 - 18955.028: 96.4509% ( 22)
00:07:42.935 18955.028 - 19055.852: 96.6629% ( 19)
00:07:42.935 19055.852 - 19156.677: 97.0424% ( 34)
00:07:42.935 19156.677 - 19257.502: 97.3772% ( 30)
00:07:42.935 19257.502 - 19358.326: 97.6004% ( 20)
00:07:42.935 19358.326 - 19459.151: 98.0469% ( 40)
00:07:42.935 19459.151 - 19559.975: 98.2924% ( 22)
00:07:42.935 19559.975 - 19660.800: 98.5045% ( 19)
00:07:42.935 19660.800 - 19761.625: 98.6496% ( 13)
00:07:42.935 19761.625 - 19862.449: 98.8058% ( 14)
00:07:42.935 19862.449 - 19963.274: 98.9732% ( 15)
00:07:42.935 19963.274 - 20064.098: 99.1406% ( 15)
00:07:42.935 20064.098 - 20164.923: 99.2634% ( 11)
00:07:42.935 20164.923 - 20265.748: 99.2857% ( 2)
00:07:42.936 22786.363 - 22887.188: 99.2969% ( 1)
00:07:42.936 22887.188 - 22988.012: 99.3304% ( 3)
00:07:42.936 22988.012 - 23088.837: 99.3638% ( 3)
00:07:42.936 23088.837 - 23189.662: 99.3973% ( 3)
00:07:42.936 23189.662 - 23290.486: 99.4420% ( 4)
00:07:42.936 23290.486 - 23391.311: 99.4754% ( 3)
00:07:42.936 23391.311 - 23492.135: 99.5089% ( 3)
00:07:42.936 23492.135 - 23592.960: 99.5424% ( 3)
00:07:42.936 23592.960 - 23693.785: 99.5759% ( 3)
00:07:42.936 23693.785 - 23794.609: 99.6094% ( 3)
00:07:42.936 23794.609 - 23895.434: 99.6429% ( 3)
00:07:42.936 23895.434 - 23996.258: 99.6763% ( 3)
00:07:42.936 23996.258 - 24097.083: 99.7098% ( 3)
00:07:42.936 24097.083 - 24197.908: 99.7433% ( 3)
00:07:42.936 24197.908 - 24298.732: 99.7656% ( 2)
00:07:42.936 24298.732 - 24399.557: 99.7991% ( 3)
00:07:42.936 24399.557 - 24500.382: 99.8326% ( 3)
00:07:42.936 24500.382 - 24601.206: 99.8661% ( 3)
00:07:42.936 24601.206 - 24702.031: 99.8996% ( 3)
00:07:42.936 24702.031 - 24802.855: 99.9330% ( 3)
00:07:42.936 24802.855 - 24903.680: 99.9665% ( 3)
00:07:42.936 24903.680 - 25004.505: 100.0000% ( 3)
00:07:42.936
00:07:42.936 10:06:48 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:42.936
00:07:42.936 real 0m2.529s
00:07:42.936 user 0m2.188s
00:07:42.936 sys 0m0.215s
00:07:42.936 10:06:48 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.936 10:06:48 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:42.936 ************************************
00:07:42.936 END TEST nvme_perf
00:07:42.936 ************************************
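
Each row in the latency tables above is one histogram bucket: the bucket's bounds in microseconds, the cumulative percentage of IOs that completed at or below the bucket's upper bound, and, in parentheses, the raw count of IOs that fell inside the bucket. Percentile latencies read straight off the cumulative column: the first bucket whose cumulative figure reaches the target percentage bounds that percentile from above. In the 0000:00:12.0 NSID 3 table, for example, the column first crosses 50% at the bucket ending 14014.622 us and first crosses 99% at the bucket ending 20064.098 us. A standalone C sketch of that lookup, with sample rows excerpted from the NSID 3 table (illustrative code, not part of the SPDK tree):

    #include <stdio.h>

    /* One row of the cumulative latency table: upper bucket bound in
     * microseconds and the cumulative share of IOs up to that bound. */
    struct bucket {
        double range_end_us;
        double cumulative_pct;
    };

    /* The first bucket whose cumulative share reaches the target
     * percentage gives an upper bound for that percentile. */
    static double percentile_us(const struct bucket *b, int n, double pct)
    {
        for (int i = 0; i < n; i++)
            if (b[i].cumulative_pct >= pct)
                return b[i].range_end_us;
        return b[n - 1].range_end_us;
    }

    int main(void)
    {
        /* Rows excerpted from the 0000:00:12.0 NSID 3 table above. */
        const struct bucket nsid3[] = {
            { 13913.797, 49.4531 },
            { 14014.622, 51.1719 },
            { 19963.274, 98.9732 },
            { 20064.098, 99.1406 },
            { 25004.505, 100.0000 },
        };
        const int n = (int)(sizeof(nsid3) / sizeof(nsid3[0]));

        printf("p50 <= %.3f us\n", percentile_us(nsid3, n, 50.0)); /* 14014.622 */
        printf("p99 <= %.3f us\n", percentile_us(nsid3, n, 99.0)); /* 20064.098 */
        return 0;
    }

Fed the full table instead of these excerpts, the same loop recovers any percentile the cumulative column covers.
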
00:07:42.936 10:06:48 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:42.936 10:06:48 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:42.936 10:06:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.936 10:06:48 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:42.936 ************************************
00:07:42.936 START TEST nvme_hello_world
00:07:42.936 ************************************
00:07:42.936 10:06:48 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:42.936 Initializing NVMe Controllers
00:07:42.936 Attached to 0000:00:10.0
00:07:42.936 Namespace ID: 1 size: 6GB
00:07:42.936 Attached to 0000:00:11.0
00:07:42.936 Namespace ID: 1 size: 5GB
00:07:42.936 Attached to 0000:00:13.0
00:07:42.936 Namespace ID: 1 size: 1GB
00:07:42.936 Attached to 0000:00:12.0
00:07:42.936 Namespace ID: 1 size: 4GB
00:07:42.936 Namespace ID: 2 size: 4GB
00:07:42.936 Namespace ID: 3 size: 4GB
00:07:42.936 Initialization complete.
00:07:42.936 INFO: using host memory buffer for IO
00:07:42.936 Hello world!
00:07:42.936 INFO: using host memory buffer for IO
00:07:42.936 Hello world!
00:07:42.936 INFO: using host memory buffer for IO
00:07:42.936 Hello world!
00:07:42.936 INFO: using host memory buffer for IO
00:07:42.936 Hello world!
00:07:42.936 INFO: using host memory buffer for IO
00:07:42.936 Hello world!
00:07:42.936 INFO: using host memory buffer for IO
00:07:42.936 Hello world!
00:07:42.936
00:07:42.936 real 0m0.230s
00:07:42.936 user 0m0.089s
00:07:42.936 sys 0m0.097s
00:07:42.936 10:06:49 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.936 10:06:49 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:42.936 ************************************
00:07:42.936 END TEST nvme_hello_world
00:07:42.936 ************************************
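
The hello_world example prints one "using host memory buffer for IO" / "Hello world!" pair per active namespace; the four controllers above export 1 + 1 + 1 + 3 = 6 namespaces, hence the six pairs. The INFO line indicates the IO went through an ordinary DMA-able host buffer, presumably because the emulated controllers expose no controller memory buffer. Behind each pair the example essentially writes the greeting to the namespace, polls for completion (in SPDK terms, spdk_nvme_ns_cmd_write()/spdk_nvme_ns_cmd_read() plus spdk_nvme_qpair_process_completions()), reads it back, and prints the result. A self-contained model of that round trip, with a purely illustrative in-memory fake_ns standing in for the driver and device:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative stand-in for one namespace: a single 512-byte sector
     * kept in memory (the real test goes through the NVMe driver). */
    struct fake_ns {
        unsigned char sector[512];
    };

    static void ns_write(struct fake_ns *ns, const void *buf, size_t len)
    {
        memcpy(ns->sector, buf, len); /* models a write to LBA 0 */
    }

    static void ns_read(const struct fake_ns *ns, void *buf, size_t len)
    {
        memcpy(buf, ns->sector, len); /* models a read of LBA 0 */
    }

    int main(void)
    {
        static struct fake_ns namespaces[6]; /* 1 + 1 + 1 + 3 namespaces above */
        const char msg[] = "Hello world!";
        char readback[sizeof(msg)];

        for (int i = 0; i < 6; i++) {
            printf("INFO: using host memory buffer for IO\n");
            ns_write(&namespaces[i], msg, sizeof(msg));
            ns_read(&namespaces[i], readback, sizeof(msg));
            if (memcmp(msg, readback, sizeof(msg)) == 0)
                printf("%s\n", readback); /* the "Hello world!" lines above */
        }
        return 0;
    }
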
parameter 00:07:43.451 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:43.451 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:43.451 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:43.451 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:43.451 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:43.451 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:43.451 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:43.451 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:43.451 NVMe Readv/Writev Request test 00:07:43.451 Attached to 0000:00:10.0 00:07:43.451 Attached to 0000:00:11.0 00:07:43.451 Attached to 0000:00:13.0 00:07:43.451 Attached to 0000:00:12.0 00:07:43.451 0000:00:10.0: build_io_request_2 test passed 00:07:43.451 0000:00:10.0: build_io_request_4 test passed 00:07:43.451 0000:00:10.0: build_io_request_5 test passed 00:07:43.451 0000:00:10.0: build_io_request_6 test passed 00:07:43.451 0000:00:10.0: build_io_request_7 test passed 00:07:43.451 0000:00:10.0: build_io_request_10 test passed 00:07:43.451 0000:00:11.0: build_io_request_2 test passed 00:07:43.451 0000:00:11.0: build_io_request_4 test passed 00:07:43.451 0000:00:11.0: build_io_request_5 test passed 00:07:43.451 0000:00:11.0: build_io_request_6 test passed 00:07:43.451 0000:00:11.0: build_io_request_7 test passed 00:07:43.452 0000:00:11.0: build_io_request_10 test passed 00:07:43.452 Cleaning up... 00:07:43.452 00:07:43.452 real 0m0.323s 00:07:43.452 user 0m0.170s 00:07:43.452 sys 0m0.103s 00:07:43.452 ************************************ 00:07:43.452 END TEST nvme_sgl 00:07:43.452 ************************************ 00:07:43.452 10:06:49 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.452 10:06:49 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:43.452 10:06:49 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:43.452 10:06:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.452 10:06:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.452 10:06:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.452 ************************************ 00:07:43.452 START TEST nvme_e2edp 00:07:43.452 ************************************ 00:07:43.452 10:06:49 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:43.710 NVMe Write/Read with End-to-End data protection test 00:07:43.710 Attached to 0000:00:10.0 00:07:43.710 Attached to 0000:00:11.0 00:07:43.710 Attached to 0000:00:13.0 00:07:43.710 Attached to 0000:00:12.0 00:07:43.710 Cleaning up... 
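A note on the nvme_sgl output above: the test deliberately builds some requests with invalid lengths, so "Invalid IO length parameter" is the expected outcome for those, while the rest must print "test passed" (on 0000:00:10.0 and 0000:00:11.0 requests 2, 4, 5, 6, 7 and 10 pass; on 0000:00:12.0 and 0000:00:13.0 every request is rejected). A quick count over a captured copy of this output (sgl.log is a hypothetical file name):

    grep -c 'test passed' sgl.log                    # requests expected to succeed
    grep -c 'Invalid IO length parameter' sgl.log    # requests expected to be rejected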
00:07:43.710
00:07:43.710 real 0m0.217s
00:07:43.710 user 0m0.077s
00:07:43.710 sys 0m0.097s
00:07:43.710 10:06:49 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.710 10:06:49 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:43.710 ************************************
00:07:43.710 END TEST nvme_e2edp
00:07:43.710 ************************************
00:07:43.710 10:06:49 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:43.710 10:06:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:43.710 10:06:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.710 10:06:49 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:43.710 ************************************
00:07:43.710 START TEST nvme_reserve
00:07:43.710 ************************************
00:07:43.710 10:06:49 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:43.968 =====================================================
00:07:43.968 NVMe Controller at PCI bus 0, device 16, function 0
00:07:43.968 =====================================================
00:07:43.968 Reservations: Not Supported
00:07:43.968 =====================================================
00:07:43.968 NVMe Controller at PCI bus 0, device 17, function 0
00:07:43.968 =====================================================
00:07:43.968 Reservations: Not Supported
00:07:43.968 =====================================================
00:07:43.968 NVMe Controller at PCI bus 0, device 19, function 0
00:07:43.968 =====================================================
00:07:43.968 Reservations: Not Supported
00:07:43.968 =====================================================
00:07:43.968 NVMe Controller at PCI bus 0, device 18, function 0
00:07:43.968 =====================================================
00:07:43.968 Reservations: Not Supported
00:07:43.968 Reservation test passed
00:07:43.968
00:07:43.968 real 0m0.204s
00:07:43.968 user 0m0.070s
00:07:43.968 sys 0m0.092s
00:07:43.968 10:06:50 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.968 10:06:50 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:43.968 ************************************
00:07:43.968 END TEST nvme_reserve
00:07:43.968 ************************************
00:07:43.968 10:06:50 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:43.968 10:06:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:43.968 10:06:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.968 10:06:50 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:43.968 ************************************
00:07:43.968 START TEST nvme_err_injection
00:07:43.968 ************************************
00:07:43.968 10:06:50 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:44.225 NVMe Error Injection test
00:07:44.225 Attached to 0000:00:10.0
00:07:44.225 Attached to 0000:00:11.0
00:07:44.225 Attached to 0000:00:13.0
00:07:44.225 Attached to 0000:00:12.0
00:07:44.225 0000:00:10.0: get features failed as expected
00:07:44.225 0000:00:11.0: get features failed as expected
00:07:44.225 0000:00:13.0: get features failed as expected
00:07:44.225 0000:00:12.0: get features failed as expected
00:07:44.225 0000:00:12.0: get features successfully as expected
00:07:44.225 0000:00:10.0: get features successfully as expected
00:07:44.225 0000:00:11.0: get features successfully as expected
00:07:44.225 0000:00:13.0: get features successfully as expected
00:07:44.225 0000:00:10.0: read failed as expected
00:07:44.225 0000:00:11.0: read failed as expected
00:07:44.225 0000:00:12.0: read failed as expected
00:07:44.225 0000:00:13.0: read failed as expected
00:07:44.225 0000:00:10.0: read successfully as expected
00:07:44.225 0000:00:11.0: read successfully as expected
00:07:44.225 0000:00:13.0: read successfully as expected
00:07:44.225 0000:00:12.0: read successfully as expected
00:07:44.225 Cleaning up...
00:07:44.225
00:07:44.225 real 0m0.226s
00:07:44.225 user 0m0.084s
00:07:44.225 sys 0m0.098s
00:07:44.225 10:06:50 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:44.225 10:06:50 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:44.225 ************************************
00:07:44.225 END TEST nvme_err_injection
00:07:44.225 ************************************
00:07:44.225 10:06:50 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:44.225 10:06:50 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:44.225 10:06:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:44.225 10:06:50 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:44.225 ************************************
00:07:44.225 START TEST nvme_overhead
00:07:44.225 ************************************
00:07:44.225 10:06:50 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:45.598 Initializing NVMe Controllers
00:07:45.598 Attached to 0000:00:10.0
00:07:45.598 Attached to 0000:00:11.0
00:07:45.598 Attached to 0000:00:13.0
00:07:45.598 Attached to 0000:00:12.0
00:07:45.598 Initialization complete. Launching workers.
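The overhead tool prints cumulative submit and complete latency histograms below; each bucket line reads "low - high: cumulative% ( count )", so the first bucket whose cumulative percentage reaches 99 marks the 99th-percentile tail. To pull that bucket out of a captured copy of this console output (overhead.log is a hypothetical file name; the scan works whether or not the Jenkins timestamps are stripped, and exit stops at the submit histogram — drop it to scan both):

    awk '{ for (i = 1; i < NF; i++)
             if ($i == "-" && $(i+2) ~ /%$/) {
                 p = $(i+2); sub(/%/, "", p)
                 if (p + 0 >= 99) { b = $(i+1); sub(/:/, "", b); print b " us"; exit }
             } }' overhead.log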
00:07:45.598 submit (in ns) avg, min, max = 12469.0, 9730.8, 126272.3
00:07:45.598 complete (in ns) avg, min, max = 8211.4, 7409.2, 283030.0
00:07:45.598
00:07:45.598 Submit histogram
00:07:45.598 ================
00:07:45.598 Range in us Cumulative Count
00:07:45.598 9.698 - 9.748: 0.0274% ( 1)
00:07:45.598 10.338 - 10.388: 0.0822% ( 2)
00:07:45.598 10.486 - 10.535: 0.1096% ( 1)
00:07:45.598 10.535 - 10.585: 0.1918% ( 2)
00:07:45.598 10.585 - 10.634: 0.2192% ( 1)
00:07:45.598 10.634 - 10.683: 0.2740% ( 2)
00:07:45.598 10.683 - 10.732: 0.3563% ( 3)
00:07:45.598 10.732 - 10.782: 0.4111% ( 2)
00:07:45.598 10.782 - 10.831: 0.4933% ( 3)
00:07:45.598 10.831 - 10.880: 0.5755% ( 3)
00:07:45.598 10.880 - 10.929: 0.7399% ( 6)
00:07:45.598 10.929 - 10.978: 1.0140% ( 10)
00:07:45.598 10.978 - 11.028: 1.2606% ( 9)
00:07:45.598 11.028 - 11.077: 1.7539% ( 18)
00:07:45.598 11.077 - 11.126: 2.8501% ( 40)
00:07:45.598 11.126 - 11.175: 5.1247% ( 83)
00:07:45.598 11.175 - 11.225: 8.0022% ( 105)
00:07:45.598 11.225 - 11.274: 11.9485% ( 144)
00:07:45.598 11.274 - 11.323: 17.1280% ( 189)
00:07:45.598 11.323 - 11.372: 22.6089% ( 200)
00:07:45.598 11.372 - 11.422: 29.5149% ( 252)
00:07:45.598 11.422 - 11.471: 36.3661% ( 250)
00:07:45.598 11.471 - 11.520: 43.4640% ( 259)
00:07:45.598 11.520 - 11.569: 50.0959% ( 242)
00:07:45.598 11.569 - 11.618: 55.0562% ( 181)
00:07:45.598 11.618 - 11.668: 59.4958% ( 162)
00:07:45.598 11.668 - 11.717: 63.7161% ( 154)
00:07:45.598 11.717 - 11.766: 67.0595% ( 122)
00:07:45.598 11.766 - 11.815: 69.4985% ( 89)
00:07:45.598 11.815 - 11.865: 71.6361% ( 78)
00:07:45.598 11.865 - 11.914: 73.4722% ( 67)
00:07:45.598 11.914 - 11.963: 75.2535% ( 65)
00:07:45.598 11.963 - 12.012: 76.5415% ( 47)
00:07:45.598 12.012 - 12.062: 78.1036% ( 57)
00:07:45.598 12.062 - 12.111: 79.4738% ( 50)
00:07:45.598 12.111 - 12.160: 80.5700% ( 40)
00:07:45.598 12.160 - 12.209: 81.5018% ( 34)
00:07:45.598 12.209 - 12.258: 82.0225% ( 19)
00:07:45.598 12.258 - 12.308: 82.7076% ( 25)
00:07:45.598 12.308 - 12.357: 83.1461% ( 16)
00:07:45.598 12.357 - 12.406: 83.5845% ( 16)
00:07:45.598 12.406 - 12.455: 83.9408% ( 13)
00:07:45.598 12.455 - 12.505: 84.1326% ( 7)
00:07:45.598 12.505 - 12.554: 84.4889% ( 13)
00:07:45.598 12.554 - 12.603: 84.8452% ( 13)
00:07:45.598 12.603 - 12.702: 85.1740% ( 12)
00:07:45.598 12.702 - 12.800: 85.5851% ( 15)
00:07:45.598 12.800 - 12.898: 85.8317% ( 9)
00:07:45.598 12.898 - 12.997: 86.2702% ( 16)
00:07:45.598 12.997 - 13.095: 86.4620% ( 7)
00:07:45.598 13.095 - 13.194: 86.7087% ( 9)
00:07:45.598 13.194 - 13.292: 87.0101% ( 11)
00:07:45.598 13.292 - 13.391: 87.2842% ( 10)
00:07:45.598 13.391 - 13.489: 87.5034% ( 8)
00:07:45.598 13.489 - 13.588: 87.5856% ( 3)
00:07:45.598 13.588 - 13.686: 87.6679% ( 3)
00:07:45.598 13.686 - 13.785: 87.9693% ( 11)
00:07:45.598 13.785 - 13.883: 88.0241% ( 2)
00:07:45.598 13.883 - 13.982: 88.2982% ( 10)
00:07:45.598 13.982 - 14.080: 88.4900% ( 7)
00:07:45.598 14.080 - 14.178: 88.6818% ( 7)
00:07:45.598 14.178 - 14.277: 88.8463% ( 6)
00:07:45.598 14.277 - 14.375: 89.0107% ( 6)
00:07:45.598 14.375 - 14.474: 89.1203% ( 4)
00:07:45.598 14.474 - 14.572: 89.5862% ( 17)
00:07:45.598 14.572 - 14.671: 89.7506% ( 6)
00:07:45.598 14.671 - 14.769: 89.9973% ( 9)
00:07:45.598 14.769 - 14.868: 90.2165% ( 8)
00:07:45.598 14.868 - 14.966: 90.4905% ( 10)
00:07:45.598 14.966 - 15.065: 90.6824% ( 7)
00:07:45.598 15.065 - 15.163: 90.8742% ( 7)
00:07:45.598 15.163 - 15.262: 91.1757% ( 11)
00:07:45.598 15.262 - 15.360: 91.6141% ( 16)
00:07:45.598 15.360 - 15.458: 91.8334% ( 8)
00:07:45.598 15.458 - 15.557: 92.0526% ( 8)
00:07:45.598 15.557 - 15.655: 92.2993% ( 9)
00:07:45.598 15.655 - 15.754: 92.4637% ( 6)
00:07:45.598 15.754 - 15.852: 92.6281% ( 6)
00:07:45.598 15.852 - 15.951: 92.9022% ( 10)
00:07:45.598 15.951 - 16.049: 93.1488% ( 9)
00:07:45.598 16.049 - 16.148: 93.4229% ( 10)
00:07:45.598 16.148 - 16.246: 93.7243% ( 11)
00:07:45.598 16.246 - 16.345: 93.8065% ( 3)
00:07:45.598 16.345 - 16.443: 93.9435% ( 5)
00:07:45.598 16.443 - 16.542: 94.1354% ( 7)
00:07:45.598 16.542 - 16.640: 94.4642% ( 12)
00:07:45.598 16.640 - 16.738: 94.6835% ( 8)
00:07:45.598 16.738 - 16.837: 94.7657% ( 3)
00:07:45.598 16.837 - 16.935: 94.9301% ( 6)
00:07:45.598 16.935 - 17.034: 95.0945% ( 6)
00:07:45.598 17.034 - 17.132: 95.3412% ( 9)
00:07:45.598 17.132 - 17.231: 95.4782% ( 5)
00:07:45.598 17.231 - 17.329: 95.5878% ( 4)
00:07:45.598 17.329 - 17.428: 95.7523% ( 6)
00:07:45.598 17.428 - 17.526: 95.8345% ( 3)
00:07:45.598 17.526 - 17.625: 95.9989% ( 6)
00:07:45.598 17.625 - 17.723: 96.1359% ( 5)
00:07:45.598 17.723 - 17.822: 96.3278% ( 7)
00:07:45.598 17.822 - 17.920: 96.4648% ( 5)
00:07:45.598 17.920 - 18.018: 96.6018% ( 5)
00:07:45.598 18.018 - 18.117: 96.7936% ( 7)
00:07:45.598 18.117 - 18.215: 96.9033% ( 4)
00:07:45.598 18.314 - 18.412: 96.9855% ( 3)
00:07:45.598 18.412 - 18.511: 97.1499% ( 6)
00:07:45.598 18.511 - 18.609: 97.2047% ( 2)
00:07:45.598 18.609 - 18.708: 97.3143% ( 4)
00:07:45.598 18.708 - 18.806: 97.4240% ( 4)
00:07:45.598 18.806 - 18.905: 97.5610% ( 5)
00:07:45.598 18.905 - 19.003: 97.6158% ( 2)
00:07:45.598 19.003 - 19.102: 97.6980% ( 3)
00:07:45.598 19.102 - 19.200: 97.8076% ( 4)
00:07:45.598 19.200 - 19.298: 97.8898% ( 3)
00:07:45.598 19.397 - 19.495: 97.9446% ( 2)
00:07:45.598 19.495 - 19.594: 98.0543% ( 4)
00:07:45.598 19.594 - 19.692: 98.1913% ( 5)
00:07:45.598 19.692 - 19.791: 98.2735% ( 3)
00:07:45.598 19.889 - 19.988: 98.3831% ( 4)
00:07:45.598 19.988 - 20.086: 98.4379% ( 2)
00:07:45.598 20.086 - 20.185: 98.5201% ( 3)
00:07:45.598 20.185 - 20.283: 98.6024% ( 3)
00:07:45.598 20.382 - 20.480: 98.6846% ( 3)
00:07:45.598 20.578 - 20.677: 98.7120% ( 1)
00:07:45.598 20.677 - 20.775: 98.7394% ( 1)
00:07:45.598 20.972 - 21.071: 98.7668% ( 1)
00:07:45.598 21.071 - 21.169: 98.7942% ( 1)
00:07:45.598 21.366 - 21.465: 98.8216% ( 1)
00:07:45.598 21.465 - 21.563: 98.8490% ( 1)
00:07:45.598 21.563 - 21.662: 98.8764% ( 1)
00:07:45.598 21.858 - 21.957: 98.9038% ( 1)
00:07:45.598 22.252 - 22.351: 98.9312% ( 1)
00:07:45.598 22.351 - 22.449: 98.9586% ( 1)
00:07:45.598 22.548 - 22.646: 98.9860% ( 1)
00:07:45.598 22.646 - 22.745: 99.0408% ( 2)
00:07:45.598 22.942 - 23.040: 99.1230% ( 3)
00:07:45.598 23.434 - 23.532: 99.1505% ( 1)
00:07:45.598 24.320 - 24.418: 99.1779% ( 1)
00:07:45.598 24.418 - 24.517: 99.2327% ( 2)
00:07:45.598 25.600 - 25.797: 99.2875% ( 2)
00:07:45.598 25.994 - 26.191: 99.3149% ( 1)
00:07:45.598 26.782 - 26.978: 99.3423% ( 1)
00:07:45.598 27.569 - 27.766: 99.3697% ( 1)
00:07:45.598 27.766 - 27.963: 99.4245% ( 2)
00:07:45.598 28.160 - 28.357: 99.4793% ( 2)
00:07:45.598 28.948 - 29.145: 99.5067% ( 1)
00:07:45.598 30.129 - 30.326: 99.5341% ( 1)
00:07:45.598 30.917 - 31.114: 99.5615% ( 1)
00:07:45.598 34.462 - 34.658: 99.5889% ( 1)
00:07:45.598 43.914 - 44.111: 99.6163% ( 1)
00:07:45.598 45.292 - 45.489: 99.6437% ( 1)
00:07:45.598 47.655 - 47.852: 99.6711% ( 1)
00:07:45.598 50.412 - 50.806: 99.7260% ( 2)
00:07:45.598 51.200 - 51.594: 99.7534% ( 1)
00:07:45.598 52.382 - 52.775: 99.7808% ( 1)
00:07:45.598 57.502 - 57.895: 99.8082% ( 1)
00:07:45.598 63.803 - 64.197: 99.8356% ( 1)
00:07:45.598 64.591 - 64.985: 99.8630% ( 1)
00:07:45.598 66.560 - 66.954: 99.8904% ( 1)
00:07:45.598 70.498 - 70.892: 99.9178% ( 1)
00:07:45.598 72.862 - 73.255: 99.9452% ( 1)
00:07:45.598 106.338 - 107.126: 99.9726% ( 1)
00:07:45.598 126.031 - 126.818: 100.0000% ( 1)
00:07:45.598
00:07:45.598 Complete histogram
00:07:45.598 ==================
00:07:45.598 Range in us Cumulative Count
00:07:45.598 7.385 - 7.434: 0.0822% ( 3)
00:07:45.598 7.434 - 7.483: 0.8770% ( 29)
00:07:45.598 7.483 - 7.532: 3.0419% ( 79)
00:07:45.598 7.532 - 7.582: 8.7147% ( 207)
00:07:45.598 7.582 - 7.631: 17.8953% ( 335)
00:07:45.598 7.631 - 7.680: 31.4059% ( 493)
00:07:45.598 7.680 - 7.729: 44.2861% ( 470)
00:07:45.599 7.729 - 7.778: 55.1932% ( 398)
00:07:45.599 7.778 - 7.828: 61.8526% ( 243)
00:07:45.599 7.828 - 7.877: 65.7988% ( 144)
00:07:45.599 7.877 - 7.926: 68.7038% ( 106)
00:07:45.599 7.926 - 7.975: 70.3754% ( 61)
00:07:45.599 7.975 - 8.025: 71.3072% ( 34)
00:07:45.599 8.025 - 8.074: 72.2938% ( 36)
00:07:45.599 8.074 - 8.123: 73.3900% ( 40)
00:07:45.599 8.123 - 8.172: 74.6232% ( 45)
00:07:45.599 8.172 - 8.222: 76.7882% ( 79)
00:07:45.599 8.222 - 8.271: 79.8575% ( 112)
00:07:45.599 8.271 - 8.320: 82.8994% ( 111)
00:07:45.599 8.320 - 8.369: 86.3798% ( 127)
00:07:45.599 8.369 - 8.418: 89.1477% ( 101)
00:07:45.599 8.418 - 8.468: 91.5593% ( 88)
00:07:45.599 8.468 - 8.517: 92.9570% ( 51)
00:07:45.599 8.517 - 8.566: 94.0532% ( 40)
00:07:45.599 8.566 - 8.615: 94.6835% ( 23)
00:07:45.599 8.615 - 8.665: 95.1768% ( 18)
00:07:45.599 8.665 - 8.714: 95.6152% ( 16)
00:07:45.599 8.714 - 8.763: 95.8893% ( 10)
00:07:45.599 8.763 - 8.812: 95.9989% ( 4)
00:07:45.599 8.812 - 8.862: 96.2181% ( 8)
00:07:45.599 8.862 - 8.911: 96.3826% ( 6)
00:07:45.599 8.911 - 8.960: 96.6018% ( 8)
00:07:45.599 8.960 - 9.009: 96.7388% ( 5)
00:07:45.599 9.058 - 9.108: 96.7936% ( 2)
00:07:45.599 9.108 - 9.157: 96.8485% ( 2)
00:07:45.599 9.206 - 9.255: 96.9307% ( 3)
00:07:45.599 9.305 - 9.354: 96.9581% ( 1)
00:07:45.599 9.551 - 9.600: 96.9855% ( 1)
00:07:45.599 9.649 - 9.698: 97.0129% ( 1)
00:07:45.599 9.698 - 9.748: 97.0403% ( 1)
00:07:45.599 9.748 - 9.797: 97.0677% ( 1)
00:07:45.599 9.846 - 9.895: 97.0951% ( 1)
00:07:45.599 9.945 - 9.994: 97.1225% ( 1)
00:07:45.599 10.092 - 10.142: 97.1499% ( 1)
00:07:45.599 10.142 - 10.191: 97.1773% ( 1)
00:07:45.599 10.338 - 10.388: 97.2047% ( 1)
00:07:45.599 10.388 - 10.437: 97.2321% ( 1)
00:07:45.599 10.437 - 10.486: 97.2595% ( 1)
00:07:45.599 10.585 - 10.634: 97.3417% ( 3)
00:07:45.599 10.634 - 10.683: 97.3691% ( 1)
00:07:45.599 10.683 - 10.732: 97.4240% ( 2)
00:07:45.599 10.732 - 10.782: 97.4788% ( 2)
00:07:45.599 10.782 - 10.831: 97.5062% ( 1)
00:07:45.599 10.831 - 10.880: 97.5336% ( 1)
00:07:45.599 10.880 - 10.929: 97.5610% ( 1)
00:07:45.599 10.978 - 11.028: 97.6706% ( 4)
00:07:45.599 11.028 - 11.077: 97.6980% ( 1)
00:07:45.599 11.077 - 11.126: 97.7528% ( 2)
00:07:45.599 11.126 - 11.175: 97.7802% ( 1)
00:07:45.599 11.225 - 11.274: 97.8076% ( 1)
00:07:45.599 11.274 - 11.323: 97.8350% ( 1)
00:07:45.599 11.372 - 11.422: 97.8624% ( 1)
00:07:45.599 11.422 - 11.471: 97.8898% ( 1)
00:07:45.599 11.618 - 11.668: 97.9172% ( 1)
00:07:45.599 11.717 - 11.766: 97.9446% ( 1)
00:07:45.599 11.815 - 11.865: 97.9720% ( 1)
00:07:45.599 11.963 - 12.012: 97.9995% ( 1)
00:07:45.599 12.012 - 12.062: 98.0269% ( 1)
00:07:45.599 12.997 - 13.095: 98.0817% ( 2)
00:07:45.599 13.095 - 13.194: 98.1639% ( 3)
00:07:45.599 13.194 - 13.292: 98.1913% ( 1)
00:07:45.599 13.292 - 13.391: 98.2461% ( 2)
00:07:45.599 13.391 - 13.489: 98.3557% ( 4)
00:07:45.599 13.489 - 13.588: 98.4653% ( 4)
00:07:45.599 13.588 - 13.686: 98.4927% ( 1)
00:07:45.599 13.686 - 13.785: 98.5201% ( 1)
00:07:45.599 13.785 - 13.883: 98.5750% ( 2)
00:07:45.599 13.883 - 13.982: 98.6298% ( 2)
00:07:45.599 13.982 - 14.080: 98.7120% ( 3)
00:07:45.599 14.080 - 14.178: 98.7394% ( 1)
00:07:45.599 14.178 - 14.277: 98.7942% ( 2)
00:07:45.599 14.277 - 14.375: 98.8764% ( 3)
00:07:45.599 14.375 - 14.474: 98.9860% ( 4)
00:07:45.599 14.474 - 14.572: 99.0134% ( 1)
00:07:45.599 14.671 - 14.769: 99.0682% ( 2)
00:07:45.599 14.769 - 14.868: 99.0956% ( 1)
00:07:45.599 17.329 - 17.428: 99.1505% ( 2)
00:07:45.599 18.511 - 18.609: 99.1779% ( 1)
00:07:45.599 18.708 - 18.806: 99.2053% ( 1)
00:07:45.599 19.102 - 19.200: 99.2327% ( 1)
00:07:45.599 19.298 - 19.397: 99.2601% ( 1)
00:07:45.599 19.692 - 19.791: 99.2875% ( 1)
00:07:45.599 19.988 - 20.086: 99.3149% ( 1)
00:07:45.599 20.086 - 20.185: 99.3423% ( 1)
00:07:45.599 20.283 - 20.382: 99.3697% ( 1)
00:07:45.599 20.382 - 20.480: 99.3971% ( 1)
00:07:45.599 21.169 - 21.268: 99.4245% ( 1)
00:07:45.599 21.268 - 21.366: 99.4793% ( 2)
00:07:45.599 21.366 - 21.465: 99.5067% ( 1)
00:07:45.599 21.662 - 21.760: 99.5341% ( 1)
00:07:45.599 21.957 - 22.055: 99.5615% ( 1)
00:07:45.599 22.055 - 22.154: 99.5889% ( 1)
00:07:45.599 22.154 - 22.252: 99.6163% ( 1)
00:07:45.599 22.449 - 22.548: 99.6437% ( 1)
00:07:45.599 23.434 - 23.532: 99.6711% ( 1)
00:07:45.599 23.532 - 23.631: 99.6985% ( 1)
00:07:45.599 24.517 - 24.615: 99.7260% ( 1)
00:07:45.599 25.403 - 25.600: 99.7534% ( 1)
00:07:45.599 25.600 - 25.797: 99.7808% ( 1)
00:07:45.599 34.855 - 35.052: 99.8082% ( 1)
00:07:45.599 38.400 - 38.597: 99.8356% ( 1)
00:07:45.599 41.354 - 41.551: 99.8630% ( 1)
00:07:45.599 44.898 - 45.095: 99.8904% ( 1)
00:07:45.599 46.671 - 46.868: 99.9178% ( 1)
00:07:45.599 51.988 - 52.382: 99.9452% ( 1)
00:07:45.599 58.289 - 58.683: 99.9726% ( 1)
00:07:45.599 281.994 - 283.569: 100.0000% ( 1)
00:07:45.599
00:07:45.599 ************************************
00:07:45.599 END TEST nvme_overhead
00:07:45.599 ************************************
00:07:45.599
00:07:45.599 real 0m1.238s
00:07:45.599 user 0m1.079s
00:07:45.599 sys 0m0.100s
00:07:45.599 10:06:51 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.599 10:06:51 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:07:45.599 10:06:51 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:45.599 10:06:51 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:45.599 10:06:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.599 10:06:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:45.599 ************************************
00:07:45.599 START TEST nvme_arbitration
00:07:45.599 ************************************
00:07:45.599 10:06:51 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:48.879 Initializing NVMe Controllers
00:07:48.879 Attached to 0000:00:10.0
00:07:48.879 Attached to 0000:00:11.0
00:07:48.879 Attached to 0000:00:13.0
00:07:48.879 Attached to 0000:00:12.0
00:07:48.879 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:07:48.879 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:07:48.879 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:07:48.879 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:07:48.879 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:07:48.879 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:07:48.879 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:48.879 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:48.879 Initialization complete. Launching workers.
00:07:48.879 Starting thread on core 1 with urgent priority queue
00:07:48.879 Starting thread on core 2 with urgent priority queue
00:07:48.879 Starting thread on core 3 with urgent priority queue
00:07:48.879 Starting thread on core 0 with urgent priority queue
00:07:48.879 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios
00:07:48.879 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios
00:07:48.879 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios
00:07:48.879 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios
00:07:48.879 QEMU NVMe Ctrl (12343 ) core 2: 874.67 IO/s 114.33 secs/100000 ios
00:07:48.879 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios
00:07:48.879 ========================================================
00:07:48.879
00:07:48.879
00:07:48.879 real 0m3.306s
00:07:48.879 user 0m9.242s
00:07:48.879 sys 0m0.104s
00:07:48.879 10:06:54 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.879 ************************************
00:07:48.879 END TEST nvme_arbitration
00:07:48.879 ************************************
00:07:48.879 10:06:54 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:48.879 10:06:55 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:48.879 10:06:55 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:48.879 10:06:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.879 10:06:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.136 ************************************
00:07:49.136 START TEST nvme_single_aen
00:07:49.136 ************************************
00:07:49.136 10:06:55 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:49.136 Asynchronous Event Request test
00:07:49.136 Attached to 0000:00:10.0
00:07:49.136 Attached to 0000:00:11.0
00:07:49.136 Attached to 0000:00:13.0
00:07:49.136 Attached to 0000:00:12.0
00:07:49.136 Reset controller to setup AER completions for this process
00:07:49.136 Registering asynchronous event callbacks...
00:07:49.136 Getting orig temperature thresholds of all controllers
00:07:49.136 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:49.136 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:49.136 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:49.136 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:49.136 Setting all controllers temperature threshold low to trigger AER
00:07:49.136 Waiting for all controllers temperature threshold to be set lower
00:07:49.136 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:49.136 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:49.136 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:49.136 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:07:49.136 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:49.136 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:07:49.136 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:49.136 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:07:49.136 Waiting for all controllers to trigger AER and reset threshold
00:07:49.136 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:49.136 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:49.136 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:49.136 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:07:49.136 Cleaning up...
00:07:49.136
00:07:49.136 real 0m0.244s
00:07:49.136 user 0m0.095s
00:07:49.136 sys 0m0.100s
00:07:49.136 10:06:55 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.136 ************************************
00:07:49.136 END TEST nvme_single_aen
00:07:49.136 ************************************
00:07:49.136 10:06:55 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:49.392 10:06:55 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:49.392 10:06:55 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.392 10:06:55 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.392 10:06:55 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.392 ************************************
00:07:49.392 START TEST nvme_doorbell_aers
00:07:49.392 ************************************
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
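As the trace around this point shows, nvme_doorbell_aers builds its controller list by having scripts/gen_nvme.sh emit an NVMe attach config as JSON and letting jq pull out each PCIe address, then loops over the addresses running the doorbell_aers binary under a 10-second timeout. A condensed sketch of that flow; the JSON shape in the comment is an assumption inferred from the jq filter, not quoted from gen_nvme.sh:

    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh output is shaped roughly like:
    #   {"config":[{"params":{"traddr":"0000:00:10.0", ...}}, ...]}
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done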
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:49.392 10:06:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:49.650 [2024-12-06 10:06:55.625533] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:07:59.636 Executing: test_write_invalid_db
00:07:59.636 Waiting for AER completion...
00:07:59.636 Failure: test_write_invalid_db
00:07:59.636
00:07:59.636 Executing: test_invalid_db_write_overflow_sq
00:07:59.636 Waiting for AER completion...
00:07:59.636 Failure: test_invalid_db_write_overflow_sq
00:07:59.636
00:07:59.636 Executing: test_invalid_db_write_overflow_cq
00:07:59.636 Waiting for AER completion...
00:07:59.636 Failure: test_invalid_db_write_overflow_cq
00:07:59.636
00:07:59.636 10:07:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:59.636 10:07:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:07:59.637 [2024-12-06 10:07:05.661956] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:09.680 Executing: test_write_invalid_db
00:08:09.680 Waiting for AER completion...
00:08:09.680 Failure: test_write_invalid_db
00:08:09.680
00:08:09.680 Executing: test_invalid_db_write_overflow_sq
00:08:09.680 Waiting for AER completion...
00:08:09.680 Failure: test_invalid_db_write_overflow_sq
00:08:09.680
00:08:09.680 Executing: test_invalid_db_write_overflow_cq
00:08:09.680 Waiting for AER completion...
00:08:09.680 Failure: test_invalid_db_write_overflow_cq
00:08:09.680
00:08:09.680 10:07:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:09.680 10:07:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:08:09.681 [2024-12-06 10:07:15.733207] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:19.643 Executing: test_write_invalid_db
00:08:19.643 Waiting for AER completion...
00:08:19.643 Failure: test_write_invalid_db
00:08:19.643
00:08:19.643 Executing: test_invalid_db_write_overflow_sq
00:08:19.643 Waiting for AER completion...
00:08:19.643 Failure: test_invalid_db_write_overflow_sq
00:08:19.643
00:08:19.643 Executing: test_invalid_db_write_overflow_cq
00:08:19.643 Waiting for AER completion...
00:08:19.643 Failure: test_invalid_db_write_overflow_cq
00:08:19.643
00:08:19.643 10:07:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:08:19.643 10:07:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:08:19.644 [2024-12-06 10:07:25.738727] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.608 Executing: test_write_invalid_db
00:08:29.608 Waiting for AER completion...
00:08:29.608 Failure: test_write_invalid_db
00:08:29.608
00:08:29.608 Executing: test_invalid_db_write_overflow_sq
00:08:29.608 Waiting for AER completion...
00:08:29.608 Failure: test_invalid_db_write_overflow_sq
00:08:29.608
00:08:29.608 Executing: test_invalid_db_write_overflow_cq
00:08:29.608 Waiting for AER completion...
00:08:29.608 Failure: test_invalid_db_write_overflow_cq
00:08:29.608
00:08:29.608
00:08:29.608 real 0m40.203s
00:08:29.608 user 0m34.147s
00:08:29.608 sys 0m5.627s
00:08:29.608 ************************************
00:08:29.608 END TEST nvme_doorbell_aers
00:08:29.608 ************************************
00:08:29.608 10:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:29.608 10:07:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:08:29.608 10:07:35 nvme -- nvme/nvme.sh@97 -- # uname
00:08:29.608 10:07:35 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:08:29.608 10:07:35 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:29.608 10:07:35 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:08:29.608 10:07:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:29.608 10:07:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:29.608 ************************************
00:08:29.608 START TEST nvme_multi_aen
00:08:29.608 ************************************
00:08:29.608 10:07:35 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:08:29.866 [2024-12-06 10:07:35.780985] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.781077] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.781094] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.783562] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.783657] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.783690] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.786066] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.786351] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.786617] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.789333] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.789735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 [2024-12-06 10:07:35.789946] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63430) is not found. Dropping the request.
00:08:29.866 Child process pid: 63951
00:08:30.126 [Child] Asynchronous Event Request test
00:08:30.126 [Child] Attached to 0000:00:10.0
00:08:30.126 [Child] Attached to 0000:00:11.0
00:08:30.126 [Child] Attached to 0000:00:13.0
00:08:30.126 [Child] Attached to 0000:00:12.0
00:08:30.126 [Child] Registering asynchronous event callbacks...
00:08:30.126 [Child] Getting orig temperature thresholds of all controllers
00:08:30.126 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 [Child] Waiting for all controllers to trigger AER and reset threshold
00:08:30.126 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 [Child] Cleaning up...
00:08:30.126 Asynchronous Event Request test
00:08:30.126 Attached to 0000:00:10.0
00:08:30.126 Attached to 0000:00:11.0
00:08:30.126 Attached to 0000:00:13.0
00:08:30.126 Attached to 0000:00:12.0
00:08:30.126 Reset controller to setup AER completions for this process
00:08:30.126 Registering asynchronous event callbacks...
00:08:30.126 Getting orig temperature thresholds of all controllers
00:08:30.126 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:30.126 Setting all controllers temperature threshold low to trigger AER
00:08:30.126 Waiting for all controllers temperature threshold to be set lower
00:08:30.126 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:30.126 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:30.126 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:30.126 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:30.126 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:30.126 Waiting for all controllers to trigger AER and reset threshold
00:08:30.126 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:30.126 Cleaning up...
00:08:30.126
00:08:30.126 real 0m0.513s
00:08:30.126 user 0m0.191s
00:08:30.126 sys 0m0.205s
00:08:30.126 10:07:36 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:30.126 10:07:36 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:08:30.126 ************************************
00:08:30.126 END TEST nvme_multi_aen
00:08:30.126 ************************************
00:08:30.126 10:07:36 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:30.126 10:07:36 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:30.126 10:07:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:30.126 10:07:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:30.126 ************************************
00:08:30.126 START TEST nvme_startup
00:08:30.126 ************************************
00:08:30.126 10:07:36 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:08:30.383 Initializing NVMe Controllers
00:08:30.383 Attached to 0000:00:10.0
00:08:30.383 Attached to 0000:00:11.0
00:08:30.383 Attached to 0000:00:13.0
00:08:30.383 Attached to 0000:00:12.0
00:08:30.383 Initialization complete.
00:08:30.383 Time used:143840.984 (us).
00:08:30.383 ************************************
00:08:30.383 END TEST nvme_startup
00:08:30.383 ************************************
00:08:30.383
00:08:30.383 real 0m0.206s
00:08:30.383 user 0m0.069s
00:08:30.383 sys 0m0.095s
00:08:30.384 10:07:36 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:30.384 10:07:36 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:08:30.384 10:07:36 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:08:30.384 10:07:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:30.384 10:07:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:30.384 10:07:36 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:30.384 ************************************
00:08:30.384 START TEST nvme_multi_secondary
00:08:30.384 ************************************
00:08:30.384 10:07:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:08:30.384 10:07:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64007
00:08:30.384 10:07:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64008
00:08:30.384 10:07:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:08:30.384 10:07:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:08:30.384 10:07:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:33.674 Initializing NVMe Controllers
00:08:33.674 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:33.674 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:33.674 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:33.674 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:33.674 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:33.674 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:33.674 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:33.674 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:33.674 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:33.674 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:33.674 Initialization complete. Launching workers.
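The nvme_multi_secondary invocations traced above start one primary spdk_nvme_perf and two secondary processes that share SPDK/DPDK shared-memory instance id 0 (-i 0) — which is what lets the secondaries attach to controllers owned by the primary — while pinning each process to a different core mask; the test then waits on the two secondary pids (64007 and 64008 here). A condensed sketch of the same commands:

    bin=/home/vagrant/spdk_repo/spdk/build/bin
    "$bin/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, core 0
    "$bin/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
    pid0=$!
    "$bin/spdk_nvme_perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary, core 2
    pid1=$!
    wait "$pid0" "$pid1"

The per-run latency tables that follow report each namespace's IOPS and latency from whichever core ran it.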
00:08:33.674 ========================================================
00:08:33.674 Latency(us)
00:08:33.674 Device Information : IOPS MiB/s Average min max
00:08:33.674 PCIE (0000:00:10.0) NSID 1 from core 1: 7298.44 28.51 2190.76 715.41 6104.11
00:08:33.674 PCIE (0000:00:11.0) NSID 1 from core 1: 7298.44 28.51 2191.84 734.68 5900.67
00:08:33.674 PCIE (0000:00:13.0) NSID 1 from core 1: 7298.44 28.51 2191.97 727.65 6129.02
00:08:33.674 PCIE (0000:00:12.0) NSID 1 from core 1: 7298.44 28.51 2192.05 739.60 6451.78
00:08:33.674 PCIE (0000:00:12.0) NSID 2 from core 1: 7298.44 28.51 2192.11 735.55 6899.23
00:08:33.674 PCIE (0000:00:12.0) NSID 3 from core 1: 7298.44 28.51 2192.24 729.83 7319.67
00:08:33.674 ========================================================
00:08:33.674 Total : 43790.62 171.06 2191.83 715.41 7319.67
00:08:33.674
00:08:33.674 Initializing NVMe Controllers
00:08:33.674 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:33.674 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:33.674 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:33.674 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:33.674 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:33.674 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:33.674 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:33.674 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:33.674 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:33.674 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:33.674 Initialization complete. Launching workers.
00:08:33.674 ========================================================
00:08:33.674 Latency(us)
00:08:33.674 Device Information : IOPS MiB/s Average min max
00:08:33.674 PCIE (0000:00:10.0) NSID 1 from core 2: 2996.40 11.70 5338.36 959.07 13196.89
00:08:33.674 PCIE (0000:00:11.0) NSID 1 from core 2: 2996.40 11.70 5338.95 1065.16 13418.35
00:08:33.674 PCIE (0000:00:13.0) NSID 1 from core 2: 2996.40 11.70 5339.58 1065.35 13981.85
00:08:33.674 PCIE (0000:00:12.0) NSID 1 from core 2: 2996.40 11.70 5339.55 971.68 12735.49
00:08:33.674 PCIE (0000:00:12.0) NSID 2 from core 2: 2996.40 11.70 5339.45 1082.52 13981.32
00:08:33.674 PCIE (0000:00:12.0) NSID 3 from core 2: 2996.40 11.70 5339.44 1075.33 13034.48
00:08:33.674 ========================================================
00:08:33.674 Total : 17978.41 70.23 5339.22 959.07 13981.85
00:08:33.674
00:08:33.931 10:07:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64007
00:08:35.878 Initializing NVMe Controllers
00:08:35.878 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:35.878 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:35.878 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:35.878 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:35.878 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:35.878 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:35.878 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:35.878 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:35.878 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:35.878 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:35.878 Initialization complete. Launching workers.
00:08:35.878 ========================================================
00:08:35.878 Latency(us)
00:08:35.878 Device Information : IOPS MiB/s Average min max
00:08:35.878 PCIE (0000:00:10.0) NSID 1 from core 0: 10555.48 41.23 1514.51 716.50 11827.15
00:08:35.878 PCIE (0000:00:11.0) NSID 1 from core 0: 10555.48 41.23 1515.38 709.67 11916.96
00:08:35.878 PCIE (0000:00:13.0) NSID 1 from core 0: 10555.48 41.23 1515.36 683.27 9265.50
00:08:35.878 PCIE (0000:00:12.0) NSID 1 from core 0: 10555.48 41.23 1515.33 656.36 10090.00
00:08:35.878 PCIE (0000:00:12.0) NSID 2 from core 0: 10555.48 41.23 1515.31 646.54 11130.88
00:08:35.878 PCIE (0000:00:12.0) NSID 3 from core 0: 10555.48 41.23 1515.28 603.79 11545.83
00:08:35.878 ========================================================
00:08:35.878 Total : 63332.91 247.39 1515.20 603.79 11916.96
00:08:35.878
00:08:35.878 10:07:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64008
00:08:35.878 10:07:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64077
00:08:35.878 10:07:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:08:35.878 10:07:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64078
00:08:35.878 10:07:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:08:35.878 10:07:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:08:39.157 Initializing NVMe Controllers
00:08:39.157 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:39.157 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:39.157 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:39.157 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:39.157 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:08:39.157 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:08:39.157 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:08:39.157 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:08:39.157 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:08:39.157 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:08:39.157 Initialization complete. Launching workers.
00:08:39.157 ========================================================
00:08:39.157 Latency(us)
00:08:39.157 Device Information : IOPS MiB/s Average min max
00:08:39.157 PCIE (0000:00:10.0) NSID 1 from core 1: 7487.26 29.25 2135.50 739.23 6664.68
00:08:39.157 PCIE (0000:00:11.0) NSID 1 from core 1: 7487.26 29.25 2136.52 770.97 6277.09
00:08:39.157 PCIE (0000:00:13.0) NSID 1 from core 1: 7487.26 29.25 2136.58 744.12 5954.70
00:08:39.157 PCIE (0000:00:12.0) NSID 1 from core 1: 7487.26 29.25 2136.57 747.99 6795.39
00:08:39.157 PCIE (0000:00:12.0) NSID 2 from core 1: 7487.26 29.25 2136.51 760.39 5861.22
00:08:39.157 PCIE (0000:00:12.0) NSID 3 from core 1: 7487.26 29.25 2136.47 764.79 6281.35
00:08:39.157 ========================================================
00:08:39.157 Total : 44923.54 175.48 2136.36 739.23 6795.39
00:08:39.157
00:08:39.157 Initializing NVMe Controllers
00:08:39.157 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:39.157 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:39.157 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:39.157 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:39.157 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:39.157 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:39.157 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:39.157 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:39.157 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:39.157 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:39.157 Initialization complete. Launching workers.
00:08:39.157 ========================================================
00:08:39.157 Latency(us)
00:08:39.157 Device Information : IOPS MiB/s Average min max
00:08:39.157 PCIE (0000:00:10.0) NSID 1 from core 0: 7401.72 28.91 2160.17 753.83 7703.39
00:08:39.157 PCIE (0000:00:11.0) NSID 1 from core 0: 7401.72 28.91 2161.16 768.77 6363.14
00:08:39.157 PCIE (0000:00:13.0) NSID 1 from core 0: 7401.72 28.91 2161.09 749.57 6399.49
00:08:39.157 PCIE (0000:00:12.0) NSID 1 from core 0: 7401.72 28.91 2161.04 703.37 6545.69
00:08:39.157 PCIE (0000:00:12.0) NSID 2 from core 0: 7401.72 28.91 2160.98 666.70 7358.68
00:08:39.157 PCIE (0000:00:12.0) NSID 3 from core 0: 7401.72 28.91 2160.92 630.83 7698.69
00:08:39.157 ========================================================
00:08:39.157 Total : 44410.32 173.48 2160.89 630.83 7703.39
00:08:39.157
00:08:41.054 Initializing NVMe Controllers
00:08:41.054 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:41.054 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:41.054 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:41.054 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:41.054 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:08:41.054 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:08:41.054 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:08:41.054 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:08:41.054 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:08:41.054 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:08:41.054 Initialization complete. Launching workers.
00:08:41.054 ========================================================
00:08:41.054 Latency(us)
00:08:41.054 Device Information : IOPS MiB/s Average min max
00:08:41.054 PCIE (0000:00:10.0) NSID 1 from core 2: 4391.14 17.15 3641.16 790.85 12610.31
00:08:41.054 PCIE (0000:00:11.0) NSID 1 from core 2: 4391.14 17.15 3642.58 792.07 13071.43
00:08:41.054 PCIE (0000:00:13.0) NSID 1 from core 2: 4391.14 17.15 3642.53 809.54 16988.52
00:08:41.054 PCIE (0000:00:12.0) NSID 1 from core 2: 4391.14 17.15 3643.03 800.10 16621.27
00:08:41.054 PCIE (0000:00:12.0) NSID 2 from core 2: 4391.14 17.15 3643.00 799.01 12552.94
00:08:41.054 PCIE (0000:00:12.0) NSID 3 from core 2: 4391.14 17.15 3642.97 778.39 12634.57
00:08:41.054 ========================================================
00:08:41.054 Total : 26346.83 102.92 3642.54 778.39 16988.52
00:08:41.054
00:08:41.054 ************************************
00:08:41.054 END TEST nvme_multi_secondary
00:08:41.054 ************************************
00:08:41.054 10:07:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64077
00:08:41.054 10:07:46 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64078
00:08:41.054
00:08:41.054 real 0m10.616s
00:08:41.054 user 0m18.427s
00:08:41.054 sys 0m0.634s
00:08:41.054 10:07:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.054 10:07:46 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:08:41.054 10:07:47 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:08:41.054 10:07:47 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63034 ]]
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@1094 -- # kill 63034
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@1095 -- # wait 63034
00:08:41.054 [2024-12-06 10:07:47.028737] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.028820] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.028852] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.028874] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.031786] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.031932] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.031949] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.031961] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.033791] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.033829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.033842] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.033856] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.035722] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.035767] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.035782] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 [2024-12-06 10:07:47.035796] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63950) is not found. Dropping the request.
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:08:41.054 10:07:47 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.054 10:07:47 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:41.054 ************************************
00:08:41.054 START TEST bdev_nvme_reset_stuck_adm_cmd
00:08:41.054 ************************************
00:08:41.054 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:08:41.312 * Looking for test storage...
00:08:41.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.312 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:41.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.313 --rc genhtml_branch_coverage=1 00:08:41.313 --rc genhtml_function_coverage=1 00:08:41.313 --rc genhtml_legend=1 00:08:41.313 --rc geninfo_all_blocks=1 00:08:41.313 --rc geninfo_unexecuted_blocks=1 00:08:41.313 00:08:41.313 ' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:41.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.313 --rc genhtml_branch_coverage=1 00:08:41.313 --rc genhtml_function_coverage=1 00:08:41.313 --rc genhtml_legend=1 00:08:41.313 --rc geninfo_all_blocks=1 00:08:41.313 --rc geninfo_unexecuted_blocks=1 00:08:41.313 00:08:41.313 ' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:41.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.313 --rc genhtml_branch_coverage=1 00:08:41.313 --rc genhtml_function_coverage=1 00:08:41.313 --rc genhtml_legend=1 00:08:41.313 --rc geninfo_all_blocks=1 00:08:41.313 --rc geninfo_unexecuted_blocks=1 00:08:41.313 00:08:41.313 ' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:41.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.313 --rc genhtml_branch_coverage=1 00:08:41.313 --rc genhtml_function_coverage=1 00:08:41.313 --rc genhtml_legend=1 00:08:41.313 --rc geninfo_all_blocks=1 00:08:41.313 --rc geninfo_unexecuted_blocks=1 00:08:41.313 00:08:41.313 ' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:41.313 
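The lcov gate traced above is a plain field-wise version compare: both versions are split on '.' and '-', then components are compared numerically left to right. A condensed sketch of the same idiom (the real helper also validates that each field is decimal and supports more operators than '<'):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.- v ver1 ver2
        read -ra ver1 <<< "$1"; read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                               # equal is not strictly less-than
    }

So lt 1.15 2 returns 0 on the first field (1 < 2) and the branch/function coverage flags get exported.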
10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:41.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64240 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64240 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64240 ']' 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
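get_first_nvme_bdf above is just the head of the generated bdev config: gen_nvme.sh emits one JSON entry per local controller and jq pulls the PCI addresses back out, so the (( 4 == 0 )) guard reflects the four QEMU controllers on this VM. A minimal equivalent ($rootdir is the harness's repo root):

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1        # no NVMe controllers found
    echo "${bdfs[0]}"                      # here: 0000:00:10.0

The spdk_tgt launched next is then told to attach its nvme0 controller at that address.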
00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.313 10:07:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.313 [2024-12-06 10:07:47.449887] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:08:41.313 [2024-12-06 10:07:47.450118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64240 ] 00:08:41.570 [2024-12-06 10:07:47.618425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.570 [2024-12-06 10:07:47.727006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.570 [2024-12-06 10:07:47.727358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.570 [2024-12-06 10:07:47.727582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.570 [2024-12-06 10:07:47.727810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.511 nvme0n1 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_GjDpd.txt 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:42.511 true 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733479668 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64263 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:42.511 10:07:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:44.409 [2024-12-06 10:07:50.446495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:44.409 [2024-12-06 10:07:50.446739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:44.409 [2024-12-06 10:07:50.446764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:44.409 [2024-12-06 10:07:50.446777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:44.409 [2024-12-06 10:07:50.450113] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:44.409 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64263 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64263 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64263 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_GjDpd.txt 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:44.409 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_GjDpd.txt 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64240 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64240 ']' 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64240 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64240 00:08:44.410 killing process with pid 64240 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64240' 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64240 00:08:44.410 10:07:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64240 00:08:46.375 10:07:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:46.375 10:07:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:46.375 00:08:46.375 real 0m4.920s 00:08:46.375 user 0m17.489s 00:08:46.375 sys 0m0.520s 00:08:46.375 ************************************ 00:08:46.375 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:46.375 
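The pass/fail evaluation here decodes the completion that bdev_nvme_send_cmd saved to the temp file: the .cpl field is the 16-byte CQE in base64, and base64_decode_bits shifts and masks the status word to recover the Status Code (shift 1, mask 255) and Status Code Type (shift 9, mask 3). A sketch with the status-word offset assumed from the NVMe CQE layout (last two bytes of completion dword 3, little-endian):

    base64_decode_bits() {    # usage: base64_decode_bits <b64-cqe> <shift> <mask>
        local bytes=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        local status=$(( bytes[14] | bytes[15] << 8 ))    # assumed offset of the status field
        printf '0x%x\n' $(( (status >> $2) & $3 ))
    }
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255     # -> 0x1, Status Code
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3       # -> 0x0, Status Code Type

Both match the injected error (--sct 0 --sc 1), and diff_time came in at 2s against the 5s test_timeout, so the (( ... )) assertions at the end of the test pass.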
************************************ 00:08:46.375 10:07:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.375 10:07:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:46.375 10:07:52 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:46.375 10:07:52 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:46.375 10:07:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.375 10:07:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.375 10:07:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.375 ************************************ 00:08:46.375 START TEST nvme_fio 00:08:46.375 ************************************ 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:46.375 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:46.375 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:46.640 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:46.640 10:07:52 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:46.640 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:46.640 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:46.640 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:46.640 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:46.641 10:07:52 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:46.905 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:46.905 fio-3.35 00:08:46.905 Starting 1 thread 00:08:53.460 00:08:53.460 test: (groupid=0, jobs=1): err= 0: pid=64402: Fri Dec 6 10:07:59 2024 00:08:53.460 read: IOPS=16.1k, BW=62.8MiB/s (65.8MB/s)(128MiB/2044msec) 00:08:53.460 slat (usec): min=4, max=310, avg= 5.52, stdev= 3.27 00:08:53.460 clat (usec): min=1325, max=262430, avg=3710.72, stdev=7966.89 00:08:53.460 lat (usec): min=1329, max=262445, avg=3716.24, stdev=7967.06 00:08:53.460 clat percentiles (msec): 00:08:53.460 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:08:53.460 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4], 00:08:53.460 | 70.00th=[ 4], 80.00th=[ 5], 90.00th=[ 6], 95.00th=[ 7], 00:08:53.460 | 99.00th=[ 8], 99.50th=[ 8], 99.90th=[ 10], 99.95th=[ 257], 00:08:53.460 | 99.99th=[ 259] 00:08:53.460 bw ( KiB/s): min=37584, max=84088, per=100.00%, avg=65617.75, stdev=20312.82, samples=4 00:08:53.460 iops : min= 9396, max=21022, avg=16404.25, stdev=5078.07, samples=4 00:08:53.460 write: IOPS=16.1k, BW=62.9MiB/s (66.0MB/s)(129MiB/2044msec); 0 zone resets 00:08:53.460 slat (usec): min=4, max=107, avg= 5.75, stdev= 2.78 00:08:53.460 clat (usec): min=1298, max=262505, avg=4216.55, stdev=13809.08 00:08:53.460 lat (usec): min=1302, max=262522, avg=4222.31, stdev=13809.32 00:08:53.460 clat percentiles (msec): 00:08:53.460 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:08:53.460 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4], 00:08:53.460 | 70.00th=[ 4], 80.00th=[ 5], 90.00th=[ 6], 95.00th=[ 7], 00:08:53.460 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 259], 99.95th=[ 262], 00:08:53.460 | 99.99th=[ 264] 00:08:53.460 bw ( KiB/s): min=37496, max=83784, per=100.00%, avg=65605.75, stdev=20247.52, samples=4 00:08:53.460 iops : min= 9374, max=20946, avg=16401.25, stdev=5061.75, samples=4 00:08:53.460 lat (msec) : 2=0.52%, 4=71.86%, 10=27.42%, 250=0.01%, 500=0.19% 00:08:53.460 cpu : usr=98.92%, sys=0.05%, ctx=2, majf=0, minf=609 00:08:53.460 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:53.460 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:08:53.460 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:53.460 issued rwts: total=32860,32918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:53.460 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:53.460 00:08:53.460 Run status group 0 (all jobs): 00:08:53.460 READ: bw=62.8MiB/s (65.8MB/s), 62.8MiB/s-62.8MiB/s (65.8MB/s-65.8MB/s), io=128MiB (135MB), run=2044-2044msec 00:08:53.460 WRITE: bw=62.9MiB/s (66.0MB/s), 62.9MiB/s-62.9MiB/s (66.0MB/s-66.0MB/s), io=129MiB (135MB), run=2044-2044msec 00:08:53.718 ----------------------------------------------------- 00:08:53.718 Suppressions used: 00:08:53.718 count bytes template 00:08:53.718 1 32 /usr/src/fio/parse.c 00:08:53.718 1 8 libtcmalloc_minimal.so 00:08:53.718 ----------------------------------------------------- 00:08:53.718 00:08:53.718 10:07:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:53.718 10:07:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:53.718 10:07:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:53.718 10:07:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:53.977 10:08:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:53.977 10:08:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:54.236 10:08:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:54.236 10:08:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:54.236 10:08:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:54.236 10:08:00 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:54.494 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:54.494 fio-3.35 00:08:54.494 Starting 1 thread 00:08:58.670 00:08:58.670 test: (groupid=0, jobs=1): err= 0: pid=64464: Fri Dec 6 10:08:04 2024 00:08:58.670 read: IOPS=16.1k, BW=62.7MiB/s (65.8MB/s)(127MiB/2022msec) 00:08:58.670 slat (nsec): min=3342, max=93471, avg=5252.77, stdev=2518.30 00:08:58.670 clat (usec): min=1011, max=82729, avg=3182.95, stdev=3284.91 00:08:58.670 lat (usec): min=1017, max=82733, avg=3188.20, stdev=3285.17 00:08:58.670 clat percentiles (usec): 00:08:58.670 | 1.00th=[ 1434], 5.00th=[ 2089], 10.00th=[ 2343], 20.00th=[ 2540], 00:08:58.670 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2704], 00:08:58.670 | 70.00th=[ 2868], 80.00th=[ 3359], 90.00th=[ 4359], 95.00th=[ 5473], 00:08:58.670 | 99.00th=[ 7701], 99.50th=[ 8717], 99.90th=[61080], 99.95th=[82314], 00:08:58.670 | 99.99th=[82314] 00:08:58.670 bw ( KiB/s): min=32816, max=90088, per=100.00%, avg=64862.00, stdev=25498.38, samples=4 00:08:58.670 iops : min= 8204, max=22522, avg=16215.50, stdev=6374.59, samples=4 00:08:58.670 write: IOPS=16.1k, BW=62.9MiB/s (65.9MB/s)(127MiB/2022msec); 0 zone resets 00:08:58.670 slat (nsec): min=3487, max=78370, avg=5585.90, stdev=2546.56 00:08:58.670 clat (usec): min=1054, max=87411, avg=4750.84, stdev=7271.98 00:08:58.670 lat (usec): min=1059, max=87416, avg=4756.43, stdev=7272.18 00:08:58.670 clat percentiles (usec): 00:08:58.670 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2442], 20.00th=[ 2540], 00:08:58.670 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2737], 00:08:58.670 | 70.00th=[ 2999], 80.00th=[ 4113], 90.00th=[ 7242], 95.00th=[16909], 00:08:58.670 | 99.00th=[31327], 99.50th=[63701], 99.90th=[86508], 99.95th=[87557], 00:08:58.670 | 99.99th=[87557] 00:08:58.670 bw ( KiB/s): min=33360, max=89464, per=100.00%, avg=64898.00, stdev=25202.30, samples=4 00:08:58.670 iops : min= 8340, max=22366, avg=16224.50, stdev=6300.58, samples=4 00:08:58.670 lat (msec) : 2=3.12%, 4=79.48%, 10=12.83%, 20=2.77%, 50=1.41% 00:08:58.670 lat (msec) : 100=0.39% 00:08:58.670 cpu : usr=99.26%, sys=0.05%, ctx=4, majf=0, minf=608 00:08:58.670 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:58.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:58.670 issued rwts: total=32462,32545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.670 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:58.670 00:08:58.670 Run status group 0 (all jobs): 00:08:58.670 READ: bw=62.7MiB/s (65.8MB/s), 62.7MiB/s-62.7MiB/s (65.8MB/s-65.8MB/s), io=127MiB (133MB), run=2022-2022msec 00:08:58.670 WRITE: bw=62.9MiB/s (65.9MB/s), 62.9MiB/s-62.9MiB/s (65.9MB/s-65.9MB/s), io=127MiB (133MB), run=2022-2022msec 00:08:58.670 ----------------------------------------------------- 00:08:58.670 Suppressions used: 00:08:58.670 count bytes template 00:08:58.670 1 32 /usr/src/fio/parse.c 00:08:58.670 1 8 libtcmalloc_minimal.so 00:08:58.670 ----------------------------------------------------- 00:08:58.670 00:08:58.670 10:08:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:58.670 10:08:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:58.670 
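Each per-controller fio run in this test follows the same launch dance traced above: ldd the SPDK ioengine to find the sanitizer runtime it links against, then LD_PRELOAD the sanitizer ahead of the plugin so ASan initializes before the engine loads. A condensed sketch using this run's paths:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')    # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The filename argument encodes the transport and PCI address, with dots standing in for the colons that fio would otherwise treat as filename separators; --bs=4096 comes from the identify probe for the LBA format just before each run.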
10:08:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:58.670 10:08:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:58.928 10:08:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:58.928 10:08:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:59.185 10:08:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:59.185 10:08:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:59.185 10:08:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:59.441 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:59.441 fio-3.35 00:08:59.441 Starting 1 thread 00:09:03.625 00:09:03.625 test: (groupid=0, jobs=1): err= 0: pid=64528: Fri Dec 6 10:08:09 2024 00:09:03.625 read: IOPS=14.6k, BW=56.9MiB/s (59.7MB/s)(115MiB/2020msec) 00:09:03.625 slat (nsec): min=3342, max=72724, avg=5095.73, stdev=2504.61 00:09:03.625 clat (usec): min=802, max=66428, avg=3240.92, stdev=3238.12 00:09:03.625 lat (usec): min=807, max=66432, avg=3246.01, stdev=3238.35 00:09:03.625 clat percentiles (usec): 00:09:03.625 | 1.00th=[ 1287], 5.00th=[ 1778], 10.00th=[ 2245], 20.00th=[ 2507], 00:09:03.625 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2704], 00:09:03.625 | 70.00th=[ 2868], 80.00th=[ 3523], 90.00th=[ 4883], 95.00th=[ 5997], 00:09:03.625 | 99.00th=[ 
8455], 99.50th=[11731], 99.90th=[64750], 99.95th=[65799], 00:09:03.625 | 99.99th=[66323] 00:09:03.625 bw ( KiB/s): min=30608, max=88862, per=100.00%, avg=58767.50, stdev=27934.70, samples=4 00:09:03.625 iops : min= 7652, max=22215, avg=14691.75, stdev=6983.50, samples=4 00:09:03.625 write: IOPS=14.6k, BW=57.0MiB/s (59.8MB/s)(115MiB/2020msec); 0 zone resets 00:09:03.625 slat (nsec): min=3435, max=67184, avg=5453.53, stdev=2521.54 00:09:03.625 clat (usec): min=867, max=74372, avg=5506.59, stdev=6831.28 00:09:03.625 lat (usec): min=871, max=74377, avg=5512.04, stdev=6831.48 00:09:03.625 clat percentiles (usec): 00:09:03.625 | 1.00th=[ 1483], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:03.625 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2802], 00:09:03.625 | 70.00th=[ 3556], 80.00th=[ 5538], 90.00th=[14615], 95.00th=[21890], 00:09:03.625 | 99.00th=[29230], 99.50th=[33817], 99.90th=[67634], 99.95th=[70779], 00:09:03.625 | 99.99th=[73925] 00:09:03.625 bw ( KiB/s): min=31400, max=88279, per=100.00%, avg=58715.75, stdev=27659.11, samples=4 00:09:03.625 iops : min= 7850, max=22069, avg=14678.75, stdev=6914.51, samples=4 00:09:03.625 lat (usec) : 1000=0.05% 00:09:03.625 lat (msec) : 2=5.07%, 4=73.22%, 10=14.27%, 20=4.08%, 50=3.09% 00:09:03.625 lat (msec) : 100=0.22% 00:09:03.625 cpu : usr=99.31%, sys=0.00%, ctx=13, majf=0, minf=608 00:09:03.625 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:03.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.625 issued rwts: total=29438,29475,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.625 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.625 00:09:03.625 Run status group 0 (all jobs): 00:09:03.625 READ: bw=56.9MiB/s (59.7MB/s), 56.9MiB/s-56.9MiB/s (59.7MB/s-59.7MB/s), io=115MiB (121MB), run=2020-2020msec 00:09:03.625 WRITE: bw=57.0MiB/s (59.8MB/s), 57.0MiB/s-57.0MiB/s (59.8MB/s-59.8MB/s), io=115MiB (121MB), run=2020-2020msec 00:09:03.625 ----------------------------------------------------- 00:09:03.625 Suppressions used: 00:09:03.625 count bytes template 00:09:03.625 1 32 /usr/src/fio/parse.c 00:09:03.625 1 8 libtcmalloc_minimal.so 00:09:03.626 ----------------------------------------------------- 00:09:03.626 00:09:03.626 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:03.626 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:03.626 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:03.626 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:03.626 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:03.626 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:03.882 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:03.882 10:08:09 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:03.883 10:08:09 
nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.883 10:08:09 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:03.883 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.883 fio-3.35 00:09:03.883 Starting 1 thread 00:09:30.423 00:09:30.423 test: (groupid=0, jobs=1): err= 0: pid=64585: Fri Dec 6 10:08:35 2024 00:09:30.423 read: IOPS=23.1k, BW=90.2MiB/s (94.6MB/s)(180MiB/2001msec) 00:09:30.423 slat (nsec): min=3338, max=64496, avg=4949.70, stdev=2024.22 00:09:30.423 clat (usec): min=234, max=20639, avg=2780.05, stdev=788.98 00:09:30.423 lat (usec): min=239, max=20643, avg=2785.00, stdev=790.18 00:09:30.423 clat percentiles (usec): 00:09:30.423 | 1.00th=[ 1713], 5.00th=[ 2180], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:30.423 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:09:30.423 | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 3523], 95.00th=[ 4555], 00:09:30.423 | 99.00th=[ 6259], 99.50th=[ 6783], 99.90th=[ 7701], 99.95th=[ 8225], 00:09:30.423 | 99.99th=[11207] 00:09:30.423 bw ( KiB/s): min=90728, max=92824, per=99.29%, avg=91701.33, stdev=1055.95, samples=3 00:09:30.423 iops : min=22682, max=23206, avg=22925.33, stdev=263.99, samples=3 00:09:30.423 write: IOPS=23.0k, BW=89.7MiB/s (94.0MB/s)(179MiB/2001msec); 0 zone resets 00:09:30.423 slat (nsec): min=3487, max=84375, avg=5281.74, stdev=2082.57 00:09:30.423 clat (usec): min=214, max=7857, avg=2758.39, stdev=742.38 00:09:30.423 lat (usec): min=218, max=7870, avg=2763.67, stdev=743.63 00:09:30.423 clat percentiles (usec): 00:09:30.423 | 1.00th=[ 1696], 5.00th=[ 2180], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:30.423 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:09:30.423 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 3392], 95.00th=[ 4424], 00:09:30.423 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 7373], 99.95th=[ 7635], 00:09:30.423 | 99.99th=[ 7832] 00:09:30.423 bw ( KiB/s): 
min=90232, max=93224, per=99.97%, avg=91810.67, stdev=1502.84, samples=3 00:09:30.423 iops : min=22558, max=23306, avg=22952.67, stdev=375.71, samples=3 00:09:30.423 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.07% 00:09:30.423 lat (msec) : 2=2.67%, 4=90.76%, 10=6.47%, 20=0.01%, 50=0.01% 00:09:30.423 cpu : usr=99.30%, sys=0.00%, ctx=4, majf=0, minf=606 00:09:30.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:30.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.423 issued rwts: total=46200,45944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.423 00:09:30.423 Run status group 0 (all jobs): 00:09:30.423 READ: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=180MiB (189MB), run=2001-2001msec 00:09:30.423 WRITE: bw=89.7MiB/s (94.0MB/s), 89.7MiB/s-89.7MiB/s (94.0MB/s-94.0MB/s), io=179MiB (188MB), run=2001-2001msec 00:09:30.423 ----------------------------------------------------- 00:09:30.423 Suppressions used: 00:09:30.423 count bytes template 00:09:30.423 1 32 /usr/src/fio/parse.c 00:09:30.423 1 8 libtcmalloc_minimal.so 00:09:30.423 ----------------------------------------------------- 00:09:30.423 00:09:30.423 ************************************ 00:09:30.423 END TEST nvme_fio 00:09:30.423 ************************************ 00:09:30.423 10:08:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:30.423 10:08:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:30.423 00:09:30.423 real 0m43.804s 00:09:30.423 user 0m19.162s 00:09:30.423 sys 0m45.846s 00:09:30.423 10:08:35 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.423 10:08:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:30.423 ************************************ 00:09:30.423 END TEST nvme 00:09:30.423 ************************************ 00:09:30.423 00:09:30.423 real 1m53.643s 00:09:30.423 user 3m41.346s 00:09:30.423 sys 0m56.341s 00:09:30.423 10:08:36 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.423 10:08:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:30.423 10:08:36 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:30.423 10:08:36 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:30.423 10:08:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.423 10:08:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.423 10:08:36 -- common/autotest_common.sh@10 -- # set +x 00:09:30.423 ************************************ 00:09:30.423 START TEST nvme_scc 00:09:30.423 ************************************ 00:09:30.423 10:08:36 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:30.423 * Looking for test storage... 
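As a quick consistency check on this final run, the IOPS and bandwidth columns agree at the 4 KiB block size: 23.1k read IOPS x 4096 B is about 94.6 MB/s, the figure fio prints alongside 90.2 MiB/s (the MiB/MB pair differ by the 2^20 versus 10^6 factor), and the average write rate of 22952.67 IOPS x 4096 B is likewise about 94.0 MB/s.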
00:09:30.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.423 10:08:36 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.423 10:08:36 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.423 10:08:36 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.423 10:08:36 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.423 10:08:36 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.423 10:08:36 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.423 10:08:36 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.423 10:08:36 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:30.424 10:08:36 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.424 10:08:36 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.424 --rc genhtml_branch_coverage=1 00:09:30.424 --rc genhtml_function_coverage=1 00:09:30.424 --rc genhtml_legend=1 00:09:30.424 --rc geninfo_all_blocks=1 00:09:30.424 --rc geninfo_unexecuted_blocks=1 00:09:30.424 00:09:30.424 ' 00:09:30.424 10:08:36 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.424 --rc genhtml_branch_coverage=1 00:09:30.424 --rc genhtml_function_coverage=1 00:09:30.424 --rc genhtml_legend=1 00:09:30.424 --rc geninfo_all_blocks=1 00:09:30.424 --rc geninfo_unexecuted_blocks=1 00:09:30.424 00:09:30.424 ' 00:09:30.424 10:08:36 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.424 --rc genhtml_branch_coverage=1 00:09:30.424 --rc genhtml_function_coverage=1 00:09:30.424 --rc genhtml_legend=1 00:09:30.424 --rc geninfo_all_blocks=1 00:09:30.424 --rc geninfo_unexecuted_blocks=1 00:09:30.424 00:09:30.424 ' 00:09:30.424 10:08:36 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.424 --rc genhtml_branch_coverage=1 00:09:30.424 --rc genhtml_function_coverage=1 00:09:30.424 --rc genhtml_legend=1 00:09:30.424 --rc geninfo_all_blocks=1 00:09:30.424 --rc geninfo_unexecuted_blocks=1 00:09:30.424 00:09:30.424 ' 00:09:30.424 10:08:36 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.424 10:08:36 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.424 10:08:36 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.424 10:08:36 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.424 10:08:36 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.424 10:08:36 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:30.424 10:08:36 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
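The PATH echoed above carries the Go/protoc/golangci prefixes four times over because paths/export.sh prepends them unconditionally and is sourced once per test suite. That is harmless, but the usual guard is a containment check before prepending; a minimal sketch:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                   # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/protoc/21.7/bin
    export PATH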
00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:30.424 10:08:36 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:30.424 10:08:36 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.424 10:08:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:30.424 10:08:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:30.424 10:08:36 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:30.424 10:08:36 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:30.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.682 Waiting for block devices as requested 00:09:30.682 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.682 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.682 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:30.682 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:35.963 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:35.963 10:08:41 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:35.963 10:08:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.963 10:08:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:35.963 10:08:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.963 10:08:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:35.963 10:08:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
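The long nvme/functions.sh trace that follows is scan_nvme_ctrls building one associative array per controller: nvme id-ctrl output is read line by line, split on ':', and each field (vid, ssvid, sn, mdts, ...) is stored under its own key via the eval indirection visible in the surrounding trace. The core of the loop, minus the indirection:

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $reg ]] || continue          # skip blank lines in the id-ctrl output
        reg=${reg//[[:space:]]/}           # field name, e.g. vid, ssvid, mdts
        nvme0[$reg]=${val# }               # value as printed by nvme-cli
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[vid]}"                   # 0x1b36 for the QEMU controller above

Later tests then query capabilities such as mdts or the LBA formats from these arrays instead of re-running identify.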
00:09:35.963 id-ctrl fields parsed into the nvme0 associative array (one IFS=:, read -r reg val, eval per field):
  nvme0[vid]=0x1b36
  nvme0[ssvid]=0x1af4
  nvme0[sn]='12341 '
  nvme0[mn]='QEMU NVMe Ctrl '
  nvme0[fr]='8.0.0 '
  nvme0[rab]=6
  nvme0[ieee]=525400
  nvme0[cmic]=0
  nvme0[mdts]=7
  nvme0[cntlid]=0
  nvme0[ver]=0x10400
  nvme0[rtd3r]=0
  nvme0[rtd3e]=0
  nvme0[oaes]=0x100
  nvme0[ctratt]=0x8000
  nvme0[rrls]=0
  nvme0[cntrltype]=1
  nvme0[fguid]=00000000-0000-0000-0000-000000000000
  nvme0[crdt1]=0
  nvme0[crdt2]=0
  nvme0[crdt3]=0
  nvme0[nvmsr]=0
  nvme0[vwci]=0
  nvme0[mec]=0
  nvme0[oacs]=0x12a
  nvme0[acl]=3
  nvme0[aerl]=3
  nvme0[frmw]=0x3
  nvme0[lpa]=0x7
  nvme0[elpe]=0
  nvme0[npss]=0
  nvme0[avscc]=0
  nvme0[apsta]=0
  nvme0[wctemp]=343
  nvme0[cctemp]=373
  nvme0[mtfa]=0
  nvme0[hmpre]=0
  nvme0[hmmin]=0
  nvme0[tnvmcap]=0
  nvme0[unvmcap]=0
  nvme0[rpmbs]=0
  nvme0[edstt]=0
  nvme0[dsto]=0
  nvme0[fwug]=0
  nvme0[kas]=0
  nvme0[hctma]=0
  nvme0[mntmt]=0
  nvme0[mxtmt]=0
  nvme0[sanicap]=0
  nvme0[hmminds]=0
  nvme0[hmmaxd]=0
  nvme0[nsetidmax]=0
  nvme0[endgidmax]=0
  nvme0[anatt]=0
  nvme0[anacap]=0
  nvme0[anagrpmax]=0
  nvme0[nanagrpid]=0
  nvme0[pels]=0
  nvme0[domainid]=0
  nvme0[megcap]=0
  nvme0[sqes]=0x66
  nvme0[cqes]=0x44
  nvme0[maxcmd]=0
  nvme0[nn]=256
  nvme0[oncs]=0x15d
  nvme0[fuses]=0
  nvme0[fna]=0
  nvme0[vwc]=0x7
  nvme0[awun]=0
  nvme0[awupf]=0
  nvme0[icsvscc]=0
  nvme0[nwpc]=0
  nvme0[acwu]=0
  nvme0[ocfs]=0x3
  nvme0[sgls]=0x1
  nvme0[mnan]=0
  nvme0[maxdna]=0
  nvme0[maxcna]=0
  nvme0[subnqn]=nqn.2019-08.org.qemu:12341
  nvme0[ioccsz]=0
  nvme0[iorcsz]=0
  nvme0[icdoff]=0
  nvme0[fcatt]=0
  nvme0[msdbd]=0
  nvme0[ofcs]=0
  nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
  nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
  nvme0[active_power_workload]=-
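Among these fields, oncs=0x15d is the one this nvme_scc run presumably cares about most: bit 8 of ONCS (0x100) advertises the Copy command (Simple Copy) in the NVMe base spec, which is what the test exercises. A minimal sketch of decoding that bit from a parsed value like the one above (the `supports_simple_copy` helper is illustrative, not SPDK's code):

```bash
#!/usr/bin/env bash
# Decode the Copy bit (bit 8 of ONCS, per the NVMe base spec) from a
# parsed id-ctrl value such as the nvme0[oncs]=0x15d seen above.
declare -A nvme0=([oncs]=0x15d)

supports_simple_copy() {
  local oncs=$(( ${1:-0} ))   # arithmetic expansion handles the 0x prefix
  (( oncs & (1 << 8) ))       # true when the Copy command is advertised
}

if supports_simple_copy "${nvme0[oncs]}"; then
  echo "nvme0: Simple Copy supported (oncs=${nvme0[oncs]})"
else
  echo "nvme0: Simple Copy not supported"
fi
```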
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:09:35.965 10:08:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:35.965 id-ns fields parsed into the ng0n1 associative array:
  ng0n1[nsze]=0x140000
  ng0n1[ncap]=0x140000
  ng0n1[nuse]=0x140000
  ng0n1[nsfeat]=0x14
  ng0n1[nlbaf]=7
  ng0n1[flbas]=0x4
  ng0n1[mc]=0x3
  ng0n1[dpc]=0x1f
  ng0n1[dps]=0
  ng0n1[nmic]=0
  ng0n1[rescap]=0
  ng0n1[fpi]=0
  ng0n1[dlfeat]=1
  ng0n1[nawun]=0
  ng0n1[nawupf]=0
  ng0n1[nacwu]=0
  ng0n1[nabsn]=0
  ng0n1[nabo]=0
  ng0n1[nabspf]=0
  ng0n1[noiob]=0
  ng0n1[nvmcap]=0
  ng0n1[npwg]=0
  ng0n1[npwa]=0
  ng0n1[npdg]=0
  ng0n1[npda]=0
  ng0n1[nows]=0
  ng0n1[mssrl]=128
  ng0n1[mcl]=128
  ng0n1[msrc]=127
  ng0n1[nulbaf]=0
  ng0n1[anagrpid]=0
  ng0n1[nsattr]=0
  ng0n1[nvmsetid]=0
  ng0n1[endgid]=0
  ng0n1[nguid]=00000000000000000000000000000000
  ng0n1[eui64]=0000000000000000
  ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
  ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
  ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
  ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
  ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
  ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
  ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
  ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
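The lbafN entries above describe the eight LBA formats the namespace offers, and flbas=0x4 selects lbaf4 ("ms:0 lbads:12", marked "in use"): LBADS is a power-of-two exponent, so the namespace runs 4096-byte blocks with no metadata. A minimal sketch of decoding that, using an `ns` array literal that mirrors the parsed values above (illustrative, not SPDK code):

```bash
#!/usr/bin/env bash
# Decode the in-use LBA format from parsed id-ns fields: flbas=0x4
# selects lbaf4 ("ms:0 lbads:12"), i.e. 4096-byte logical blocks.
declare -A ns=([flbas]=0x4 [nsze]=0x140000 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')

fmt_idx=$(( ${ns[flbas]} & 0xf ))           # low nibble of FLBAS = format index
lbaf=${ns[lbaf$fmt_idx]}
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # pull the lbads value out of the string
block_size=$(( 1 << lbads ))                # LBADS is a power-of-two exponent
size_bytes=$(( ${ns[nsze]} * block_size ))

echo "in-use format lbaf${fmt_idx}: ${block_size}-byte blocks"
echo "namespace size: ${ns[nsze]} blocks ($(( size_bytes / 1024 / 1024 )) MiB)"
```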
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:35.966 10:08:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:35.966 id-ns fields parsed into the nvme0n1 associative array:
  nvme0n1[nsze]=0x140000
  nvme0n1[ncap]=0x140000
  nvme0n1[nuse]=0x140000
  nvme0n1[nsfeat]=0x14
  nvme0n1[nlbaf]=7
  nvme0n1[flbas]=0x4
  nvme0n1[mc]=0x3
  nvme0n1[dpc]=0x1f
  nvme0n1[dps]=0
  nvme0n1[nmic]=0
  nvme0n1[rescap]=0
  nvme0n1[fpi]=0
  nvme0n1[dlfeat]=1
  nvme0n1[nawun]=0
  nvme0n1[nawupf]=0
  nvme0n1[nacwu]=0
  nvme0n1[nabsn]=0
  nvme0n1[nabo]=0
  nvme0n1[nabspf]=0
  nvme0n1[noiob]=0
  nvme0n1[nvmcap]=0
  nvme0n1[npwg]=0
  nvme0n1[npwa]=0
  nvme0n1[npdg]=0
  nvme0n1[npda]=0
  nvme0n1[nows]=0
  nvme0n1[mssrl]=128
  nvme0n1[mcl]=128
  nvme0n1[msrc]=127
  nvme0n1[nulbaf]=0
  nvme0n1[anagrpid]=0
  nvme0n1[nsattr]=0
  nvme0n1[nvmsetid]=0
  nvme0n1[endgid]=0
  nvme0n1[nguid]=00000000000000000000000000000000
  nvme0n1[eui64]=0000000000000000
  nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
  nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
  nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:35.967 10:08:42 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.967 10:08:42 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:35.967 10:08:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.967 10:08:42 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:35.967 10:08:42 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.967 
10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:35.967 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:35.968 
10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
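The repeating IFS=: / read -r reg val / eval trio running above and below is nvme/functions.sh folding each "field : value" line printed by nvme-cli into a global bash associative array, so later checks can simply read ${nvme1[oncs]} and friends. A minimal sketch of that pattern, assuming one "field : value" pair per output line (an approximation for illustration, not the verbatim SPDK helper):

nvme_get() {
    # $1 names the target array; the remaining args are the nvme-cli
    # subcommand, e.g. nvme_get nvme1 id-ctrl /dev/nvme1.
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # global assoc array, e.g. nvme1=()
    while IFS=: read -r reg val; do      # split "vid : 0x1b36" at the colon
        reg=${reg// /}                   # strip padding: "lbaf  4 " -> lbaf4
        val=${val# }                     # drop the space after the colon
        [[ -n $val ]] || continue        # skip lines carrying no value
        eval "${ref}[${reg}]=\"${val}\"" # nvme1[vid]="0x1b36"
    done < <(nvme "$@")                  # the trace runs /usr/local/src/nvme-cli/nvme
}

The -g on local is what keeps the array alive after the helper returns; eval is needed because the array's name only arrives as a string.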
00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.968 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.969 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.969 10:08:42 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
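Just above, id-ns for ng1n1 reported nsze, ncap and nuse all as 0x17a17a. Those are counts of logical blocks, and the low four bits of flbas=0x7 select LBA format 7, which the lbaf table a little further down lists as lbads:12, i.e. 4096-byte blocks. A quick back-of-envelope decode using the values captured here (hypothetical snippet, not part of functions.sh):

# id-ns sizes are in logical blocks; lbads is log2(block size).
nsze=$((0x17a17a)) flbas=$((0x7)) lbads=12
echo "active LBA format: $((flbas & 0xf))"            # -> 7
echo "logical block size: $((1 << lbads)) bytes"      # -> 4096
echo "namespace size: $((nsze * (1 << lbads))) bytes" # -> 6343335936, ~5.9 GiB

nsze is the total block count, ncap the allocatable capacity and nuse the blocks currently in use; on this QEMU namespace all three coincide.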
00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.969 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:35.970 10:08:42 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 
10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
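The trace above is the heart of nvme_get in nvme/functions.sh: it runs /usr/local/src/nvme-cli/nvme id-ns against the device, splits every output line on ':' via IFS, and evals each register/value pair into a global associative array (nvme1n1[nsze]=0x17a17a, nvme1n1[flbas]=0x7, and so on). A minimal sketch of that loop, reconstructed from the statements traced here rather than copied from the real script, which does more normalization than shown:

    nvme_get_sketch() {
        # Simplified stand-in for nvme_get (assumption: the real helper
        # in nvme/functions.sh handles more edge cases than this).
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme1n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # 'nsze   ' -> 'nsze', 'lbaf  0 ' -> 'lbaf0'
            val=${val# }                     # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"       # nvme1n1[nsze]=0x17a17a ...
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage: nvme_get_sketch nvme1n1 id-ns /dev/nvme1n1; echo "${nvme1n1[nsze]}"

Because the RHS of an assignment is not word-split, the eval stays safe even for multi-word values like the lbaf entries ('ms:0 lbads:9 rp:0 ').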
00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:35.970 
10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.970 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.970 10:08:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:35.971 10:08:42 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:35.971 10:08:42 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:35.971 10:08:42 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:35.971 10:08:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:35.971 10:08:42 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
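Several of the registers captured above are bitmasks rather than scalars; oaes=0x100, for example, sets bit 8, which the NVMe spec defines as Namespace Attribute Notices. A small sketch of testing such a flag from the array the script just filled (variable names mirror the trace; the bit position comes from the spec, not from this log):

    # oaes=0x100: test bit 8 (Namespace Attribute Notices, per the NVMe spec)
    if (( ${nvme2[oaes]} & (1 << 8) )); then
        echo "nvme2 reports namespace attribute change events"
    fi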
00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:35.971 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
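The wctemp=343 and cctemp=373 values above are the warning and critical composite temperature thresholds, which the identify data reports in kelvin; converting makes the QEMU defaults obvious (70 C warning, 100 C critical). A quick sketch, with k2c as an illustrative helper:

    k2c() { echo $(( $1 - 273 )); }
    k2c "${nvme2[wctemp]}"   # 70  (343 K)
    k2c "${nvme2[cctemp]}"   # 100 (373 K)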
00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.971 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:35.972 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:35.972 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:36.236 
10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:36.236 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.237 
10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
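The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` loop traced above relies on extglob so one pattern matches both the generic character node (ng2n1) and the block node (nvme2n1) under /sys/class/nvme/nvme2. The fields just captured also pin down the namespace size: flbas=0x4 selects LBA format 4, and assuming lbaf4 matches the 'ms:0 lbads:12' entry seen for the other namespaces in this log (ng2n1's own lbaf table prints just after this point), the namespace is 0x100000 blocks of 4096 bytes, i.e. 4 GiB. A worked sketch under that assumption:

    shopt -s extglob                           # needed for the @(...) pattern above
    nsze=0x100000 lbads=12
    echo $(( nsze * (1 << lbads) ))            # 4294967296 bytes
    echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"   # 4 GiB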
00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.237 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.238 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:36.239 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 
10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:36.239 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.240 10:08:42 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.240 10:08:42 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.240 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:36.241 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:36.242 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.243 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:36.243 10:08:42 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.243 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:36.244 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.244 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.245 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:36.246 
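The nvme_get invocation that just fired above for /dev/nvme2n3 is the same pattern repeated for every namespace in this trace: nvme-cli prints one "field : value" pair per line, the loop splits each line on the colon with IFS=: and read -r reg val, and eval folds each pair into a global associative array named after the device. A minimal self-contained sketch of that pattern, assuming nvme-cli's plain-text id-ns output (the nvme binary path is the one shown in the log; nvme_get_sketch is an illustrative name, not the SPDK function):

    #!/usr/bin/env bash
    # Sketch of the parse loop traced above (illustrative, not the SPDK source).
    nvme_get_sketch() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                      # same trick as the trace: a global assoc array
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # nvme-cli pads keys, e.g. "nsze    "
            val=${val# }                         # drop the single space after the colon
            [[ -n $reg && -n $val ]] || continue # skip banner and blank lines
            eval "${ref}[${reg}]=\"\$val\""      # -> nvme2n3[nsze]="0x100000", ...
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    nvme_get_sketch nvme2n3 id-ns /dev/nvme2n3
    declare -p nvme2n3                           # inspect what was captured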
10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:36.246 10:08:42 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.246 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.247 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:36.247 10:08:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:36.247 10:08:42 nvme_scc -- 
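The eight lbaf rows captured for each namespace describe its supported LBA formats: ms is the metadata bytes per block, lbads the log2 of the data size, rp a relative-performance hint. flbas=0x4 selects format 4, which is why lbaf4 ("ms:0 lbads:12 rp:0") carries the "(in use)" tag: lbads 12 means 2^12 = 4096-byte blocks with no separate metadata. A small sketch of decoding the active block size from the values the trace just stored (the 0x0f mask is the FLBAS format-index field from the NVMe base spec, ignoring the extended-format bits for simplicity; values are seeded exactly as recorded above):

    #!/usr/bin/env bash
    # Decode the in-use block size from the fields parsed above.
    declare -A nvme2n3=([flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$((nvme2n3[flbas] & 0x0f))               # FLBAS bits 3:0 -> format index 4
    lbaf=${nvme2n3[lbaf$fmt]}
    lbads=${lbaf#*lbads:}                        # "12 rp:0 (in use)"
    lbads=${lbads%% *}                           # "12"
    echo "lbaf$fmt: $((1 << lbads))-byte blocks" # lbaf4: 4096-byte blocks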
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:36.248 10:08:42 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:36.248 10:08:42 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:36.248 10:08:42 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.248 10:08:42 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:36.248 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.248 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:36.249 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 
10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:36.249 10:08:42 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.249 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 
10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:36.250 
10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.250 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:36.251 10:08:42 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:09:36.251 10:08:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:36.251 10:08:42 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:36.252 10:08:42 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:36.252 10:08:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:36.252 10:08:42 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
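The selection pass traced above is how the harness picks a controller for the SCC test: for each controller captured by scan_nvme_ctrls, ctrl_has_scc reads back the stored ONCS word (0x15d on all four controllers in this run) and tests bit 8, the Copy-command capability bit, and get_ctrl_with_feature returns the first qualifying controller, here nvme1 on 0000:00:10.0. A minimal standalone sketch of that gate; the helper name is mine, and the real ctrl_has_scc resolves ONCS through namerefs rather than taking the raw value:

```bash
#!/usr/bin/env bash
# Sketch of the ONCS gate traced above: nvme/functions.sh@188 runs
# (( oncs & 1 << 8 )); ONCS bit 8 advertises the NVMe Copy command,
# which is what the simple-copy (SCC) test needs.
has_scc_bit() {
  local oncs=$1        # raw ONCS word, e.g. 0x15d from 'nvme id-ctrl'
  (( oncs & 1 << 8 ))  # (( )) succeeds only when the bit is set
}

has_scc_bit 0x15d && echo "Copy command supported"   # 0x15d: bit 8 set
```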
00:09:36.252 10:08:42 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:36.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:37.072 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.072 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.072 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.072 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:37.330 10:08:43 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:37.330 10:08:43 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:37.330 10:08:43 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.330 10:08:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:37.330 ************************************
00:09:37.330 START TEST nvme_simple_copy ************************************
00:09:37.330 10:08:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:37.587 Initializing NVMe Controllers
00:09:37.587 Attaching to 0000:00:10.0
00:09:37.587 Controller supports SCC. Attached to 0000:00:10.0
00:09:37.587 Namespace ID: 1 size: 6GB
00:09:37.587 Initialization complete.
00:09:37.587
00:09:37.587 Controller QEMU NVMe Ctrl (12340 )
00:09:37.587 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:37.587 Namespace Block Size:4096
00:09:37.587 Writing LBAs 0 to 63 with Random Data
00:09:37.587 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:37.587 LBAs matching Written Data: 64
00:09:37.587
00:09:37.587 real 0m0.355s
00:09:37.587 user 0m0.145s
00:09:37.587 sys 0m0.109s
00:09:37.587 ************************************
00:09:37.587 END TEST nvme_simple_copy ************************************
00:09:37.587 10:08:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.587 10:08:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:37.587 ************************************
00:09:37.587 END TEST nvme_scc ************************************
00:09:37.587
00:09:37.587 real 0m7.642s
00:09:37.587 user 0m1.136s
00:09:37.587 sys 0m1.372s
00:09:37.587 10:08:43 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.587 10:08:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:37.587 10:08:43 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:37.587 10:08:43 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:37.587 10:08:43 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:37.587 10:08:43 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:37.587 10:08:43 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:37.587 10:08:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.587 10:08:43 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.587 10:08:43 -- common/autotest_common.sh@10 -- # set +x
00:09:37.587 ************************************
00:09:37.587 START TEST nvme_fdp ************************************
00:09:37.587 10:08:43 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:37.858 * Looking for test storage...
00:09:37.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
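The lt 1.15 2 call that follows is the harness's lcov version gate: cmp_versions splits both version strings on '.', '-' or ':' (the IFS=.-: and read -ra records below), pads the shorter array, and compares element-wise; since 1 < 2 in the first slot it returns 0, which is why the legacy --rc lcov_* option names get exported further down. A condensed sketch of the same comparison under a hypothetical name:

```bash
#!/usr/bin/env bash
# Dotted-version less-than in the style of scripts/common.sh's
# lt/cmp_versions pair; the harness normalizes elements via its
# 'decimal' helper, this sketch pads missing slots with ${arr[i]:-0}.
version_lt() {
  local IFS='.-:' i n
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* flags"
```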
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:37.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.858 --rc genhtml_branch_coverage=1
00:09:37.858 --rc genhtml_function_coverage=1
00:09:37.858 --rc genhtml_legend=1
00:09:37.858 --rc geninfo_all_blocks=1
00:09:37.858 --rc geninfo_unexecuted_blocks=1
00:09:37.858
00:09:37.858 '
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:37.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.858 --rc genhtml_branch_coverage=1
00:09:37.858 --rc genhtml_function_coverage=1
00:09:37.858 --rc genhtml_legend=1
00:09:37.858 --rc geninfo_all_blocks=1
00:09:37.858 --rc geninfo_unexecuted_blocks=1
00:09:37.858
00:09:37.858 '
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:37.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.858 --rc genhtml_branch_coverage=1
00:09:37.858 --rc genhtml_function_coverage=1
00:09:37.858 --rc genhtml_legend=1
00:09:37.858 --rc geninfo_all_blocks=1
00:09:37.858 --rc geninfo_unexecuted_blocks=1
00:09:37.858
00:09:37.858 '
00:09:37.858 10:08:43 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:37.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.858 --rc genhtml_branch_coverage=1
00:09:37.858 --rc genhtml_function_coverage=1
00:09:37.858 --rc genhtml_legend=1
00:09:37.858 --rc geninfo_all_blocks=1
00:09:37.858 --rc geninfo_unexecuted_blocks=1
00:09:37.858
00:09:37.858 '
00:09:37.858 10:08:43 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
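functions.sh, sourced just above, is also the machinery behind the long per-controller register dumps in this log: scan_nvme_ctrls runs `nvme id-ctrl` for each device, splits every output line at the first colon, and evals the pair into a per-controller associative array (the nvme3[...] assignments earlier, and the nvme0[...] ones below), while readers such as get_nvme_ctrl_feature pull single fields back out through bash namerefs (the `local -n _ctrl=nvmeN` lines). A self-contained sketch of both halves, assuming nvme-cli's default "field : value" text output; the array and helper names are mine, and the harness does the store with eval rather than a nameref:

```bash
#!/usr/bin/env bash
# Parse 'nvme id-ctrl' into an associative array, then read it back
# through a nameref -- the same two moves nvme/functions.sh traces here.
declare -A nvme0_regs

scan_ctrl() {                        # scan_ctrl <dev> <array-name>
  local -n _regs=$2
  local reg val
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}         # strip the padding around the key
    [[ -n $reg && -n $val ]] || continue
    _regs[$reg]=${val# }             # e.g. _regs[oncs]='0x15d'
  done < <(nvme id-ctrl "$1")
}

get_reg() {                          # get_reg <array-name> <field>
  local -n _regs=$1
  echo "${_regs[$2]}"
}

scan_ctrl /dev/nvme0 nvme0_regs
echo "nvme0 oncs: $(get_reg nvme0_regs oncs)"
```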
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:37.858 10:08:43 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:37.858 10:08:43 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.858 10:08:43 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.858 10:08:43 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.858 10:08:43 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:09:37.858 10:08:43 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:09:37.858 10:08:43 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:09:37.858 10:08:43 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:37.859 10:08:43 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:38.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:38.116 Waiting for block devices as requested
00:09:38.373 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:38.373 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:38.373 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:38.373 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:43.647 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:43.647 10:08:49 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:43.647 10:08:49 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:43.647 10:08:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:43.647 10:08:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:43.647 10:08:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.647 10:08:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.647 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:43.648 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.648 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.649 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:43.649 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.649 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 
10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:43.650 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:43.650 10:08:49 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.650 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:43.651 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
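[Annotation] The id-ns fields captured for ng0n1 in the trace above are enough to work out the namespace geometry. A minimal sketch, assuming the associative array was filled exactly as traced (the declare line restates those values so the snippet runs standalone): FLBAS bits 3:0 select the active LBA format, and each lbafN string carries lbads, the log2 of the LBA data size.

  # Hedged sketch: derive block size and namespace size from the values the
  # trace just stored. flbas=0x4 selects lbaf4, whose lbads:12 means
  # 2^12 = 4096-byte blocks; nsze=0x140000 blocks is then 5 GiB.
  declare -A ng0n1=( [nsze]=0x140000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
  fmt=$(( ${ng0n1[flbas]} & 0xf ))           # FLBAS bits 3:0 -> format index 4
  lbaf=${ng0n1[lbaf$fmt]}                    # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=${lbaf#*lbads:}; lbads=${lbads%%[!0-9]*}
  echo $(( ${ng0n1[nsze]} * (1 << lbads) ))  # 5368709120 bytes (5 GiB)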
00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:43.651 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:43.652 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
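[Annotation] The ng0n1 parse finishes at this point, and every functions.sh@21-23 step in it is one pass of the same read loop. Below is a condensed reconstruction of that loop, pieced together from the @16-@23 line tags visible in the trace; it is an approximation for reading the log, not the verbatim SPDK helper, and the whitespace trimming in particular is simplified.

  nvme_get() {                  # nvme_get <array-name> <nvme-cli args...>
      local ref=$1 reg val
      shift
      local -gA "$ref=()"       # @20: fresh global associative array, e.g. 'ng0n1=()'
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue                 # @22: skip fields with no value
          eval "${ref}[\${reg// /}]=\${val# }"      # @23: store reg -> val (trimming simplified)
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: id-ctrl or id-ns output
  }

Each call in the trace has exactly this shape: nvme_get nvme0 id-ctrl /dev/nvme0 for a controller, nvme_get ng0n1 id-ns /dev/ng0n1 for a namespace.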
00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.652 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:43.653 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.653 10:08:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:43.653 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:43.654 10:08:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:43.654 10:08:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:43.654 10:08:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.654 10:08:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:43.654 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.654 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.655 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.656 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:43.657 10:08:49 
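For readers following the trace: every functions.sh@16-@23 record above comes from one small helper. nvme_get pipes nvme-cli output through a read loop and materializes each "field: value" line as an entry in a global bash associative array. A minimal sketch reconstructed from nothing but the traced lines (the whitespace trimming and the exact redirection are assumptions; the real SPDK nvme/functions.sh may differ):

    # Sketch of nvme_get, inferred from the @16-@23 trace records above.
    nvme_get() {
        local ref=$1 reg val                    # @17: target array name, loop vars
        shift                                   # @18: rest is the nvme-cli command
        local -gA "$ref=()"                     # @20: declare global assoc array, e.g. nvme1=()
        while IFS=: read -r reg val; do         # @21: split each line at the first ':'
            [[ -n $val ]] || continue           # @22: skip lines that are not "key: value"
            reg=${reg//[[:space:]]/}            # trimming is an assumption, not in the trace
            eval "${ref}[${reg}]=\"${val# }\""  # @23: e.g. nvme1[sn]='12340 '
        done < <("/usr/local/src/nvme-cli/nvme" "$@")   # @16: id-ctrl or id-ns
    }

After nvme_get nvme1 id-ctrl /dev/nvme1, later test steps can consult ${nvme1[mdts]}, ${nvme1[oncs]}, and so on without shelling out to nvme-cli again, which is why the log walks the full identify structure exactly once per controller.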
00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:43.657 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
[xtrace condensed: ng1n1 id-ns values parsed, 00:09:43.657-00:09:43.658:]
  nsze=0x17a17a  ncap=0x17a17a  nuse=0x17a17a  nsfeat=0x14  nlbaf=7  flbas=0x7  mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0
  fpi=0  dlfeat=1  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0  npwg=0  npwa=0  npdg=0
  npda=0  nows=0  mssrl=128  mcl=128  msrc=127  nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
  nguid=00000000000000000000000000000000  eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 '   lbaf1='ms:8 lbads:9 rp:0 '   lbaf2='ms:16 lbads:9 rp:0 '   lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 '  lbaf5='ms:8 lbads:12 rp:0 '  lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:43.658 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
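Note that the same namespace is about to be captured twice: the @54 extglob pattern matches both the generic character device node (ng1n1) and the block device node (nvme1n1) under /sys/class/nvme/nvme1, and each gets its own id-ns array. A sketch of that inner loop as it appears in the trace (extglob must be enabled; the fragment assumes it runs inside the enumeration function where $ctrl and $ctrl_dev are already set):

    shopt -s extglob                                   # the @(...) pattern needs extglob
    local -n _ctrl_ns=${ctrl_dev}_ns                   # @53: nameref onto e.g. nvme1_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54: ng1n1 and nvme1n1
        [[ -e $ns ]] || continue                       # @55: skip if the glob matched nothing
        ns_dev=${ns##*/}                               # @56: basename of the sysfs entry
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"        # @57: fill ng1n1[...] / nvme1n1[...]
        _ctrl_ns[${ns##*n}]=$ns_dev                    # @58: keyed by the namespace id, here 1
    done

The id-ns dump also decodes itself: flbas=0x7 selects LBA format 7, and lbaf7 reads ms:64 lbads:12, i.e. 4096-byte data blocks (2^12) with 64 bytes of metadata, exactly the entry the log marks '(in use)'.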
00:09:43.658 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:43.658 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:43.658 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:43.658 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:43.658 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
[xtrace condensed: nvme1n1 id-ns values parsed, 00:09:43.658-00:09:43.660, identical to ng1n1 above (same namespace via the block node):]
  nsze=0x17a17a  ncap=0x17a17a  nuse=0x17a17a  nsfeat=0x14  nlbaf=7  flbas=0x7  mc=0x3  dpc=0x1f  dps=0  nmic=0  rescap=0
  fpi=0  dlfeat=1  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0  npwg=0  npwa=0  npdg=0
  npda=0  nows=0  mssrl=128  mcl=128  msrc=127  nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
  nguid=00000000000000000000000000000000  eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 '   lbaf1='ms:8 lbads:9 rp:0 '   lbaf2='ms:16 lbads:9 rp:0 '   lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 '  lbaf5='ms:8 lbads:12 rp:0 '  lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:43.660 10:08:49 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:43.660 10:08:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:43.660 10:08:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:43.660 10:08:49 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.660 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
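Among the id-ctrl fields just recorded, nvme2[ver]=0x10400 packs the controller's NVMe spec version as major/minor/tertiary bytes (bits 31:16, 15:8, 7:0), so this QEMU controller reports NVMe 1.4.0. A one-line decode under that layout:

# VER layout: bits 31:16 major, 15:8 minor, 7:0 tertiary; 0x10400 = NVMe 1.4.0
ver=0x10400
printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))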
00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:43.661 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
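The wctemp=343 and cctemp=373 values above are the warning and critical composite temperature thresholds, which NVMe reports in Kelvin; converting with K - 273 gives the conventional 70 C warning and 100 C critical limits:

# NVMe temperature thresholds are reported in Kelvin; convert to Celsius.
wctemp=343 cctemp=373
echo "warning $((wctemp - 273)) C, critical $((cctemp - 273)) C"   # 70 C, 100 C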
00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.661 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:43.661 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.662 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
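One field from earlier in this nvme2 id-ctrl pass worth decoding is mdts=7: the maximum data transfer size is 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN). The trace does not show the CAP register, so a sketch assuming the common 4 KiB minimum page:

# Max transfer = 2^MDTS * MPSMIN page size; MDTS=7 with assumed 4 KiB pages.
mdts=7 page=4096   # page size is an assumption, not taken from this log
echo "$(((1 << mdts) * page / 1024)) KiB"   # -> 512 KiB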
00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 
10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:43.663 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.664 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:43.665 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 
10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:43.665 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:43.666 
10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.666 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
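[editor's note] The trace above shows nvme/functions.sh populating one bash associative array per namespace: nvme_get (functions.sh@17-23) runs nvme-cli's id-ns against the device, splits each output line on ':' into reg/val, and evals the pair into a globally declared array, which is why every field appears as e.g. ng2n3[nsze]=0x100000. A minimal sketch of that parsing loop follows, assuming the default human-readable "field : value" output of nvme id-ns; the helper name and exact whitespace handling are illustrative, not the verbatim functions.sh source.

# Minimal sketch of the nvme_get-style parser seen in the trace above.
# Assumptions: bash 4.2+, nvme-cli printing "field : value" lines, and
# values free of quote/dollar characters (true for id-ns fields such as
# nsze, ncap, lbaf0..lbaf7).
nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # global associative array named after the device
    while IFS=: read -r reg val; do      # val keeps any further colons ("ms:0 lbads:9 rp:0")
        reg=${reg//[[:space:]]/}         # "lbaf  4 " -> "lbaf4", matching the keys above
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[\$reg]=\"${val# }\""   # e.g. ng2n3[nsze]=0x100000
    done < <("$@")
}
# Usage, mirroring the trace:
#   nvme_get_sketch ng2n3 /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3

Storing the fields this way lets later test logic look up capabilities by name (e.g. "${ng2n3[nlbaf]}") instead of re-invoking nvme-cli per field. [end note]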
00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:43.667 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.667 10:08:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.667 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:43.668 10:08:49 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.668 
10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:43.668 10:08:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.668 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.669 
10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
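[editor's note] Once an array is filled, functions.sh@58 registers it by namespace ID (_ctrl_ns[${ns##*n}]=...), and the loop at functions.sh@54 visits both the ngXnN character devices and the nvmeXnN block namespaces of the controller through a single extglob pattern. Because the glob sorts ng* entries before nvme*n*, a block-device name written later overwrites the ng entry for the same NSID, which is the ordering visible in the trace (ng2n2, ng2n3, then nvme2n1, nvme2n2). A standalone sketch of that enumeration, with the sysfs path and array names taken from the trace; the shopt settings are assumptions about the script's environment:

# Minimal sketch of the namespace enumeration traced at functions.sh@54-58.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2               # controller sysfs node, as in the trace
declare -A _ctrl_ns=()
# "ng${ctrl##*nvme}" -> ng2 (char devices); "${ctrl##*/}n" -> nvme2n (block namespaces)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                     # e.g. ng2n3 or nvme2n1
    _ctrl_ns[${ns##*n}]=$ns_dev          # key is the NSID suffix: ng2n3 and nvme2n3 share slot 3
done

The NSID-keyed map means a test can address "namespace 3 of this controller" without caring whether it was discovered via the generic char device or the block device. [end note]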
00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:43.669 10:08:49 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.669 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:43.670 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.670 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 
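Each lbafN value captured above packs an LBA format descriptor: ms is the per-block metadata size, lbads the data size as a power of two, and rp a relative-performance hint. The "(in use)" tag on lbaf4 agrees with the flbas=0x4 captured earlier, whose low nibble selects the active format, so this namespace runs 4096-byte blocks with no separate metadata. A quick decode, with values lifted from the trace and the standard FLBAS nibble layout assumed:

    flbas=0x4
    fmt=$(( flbas & 0xf ))     # FLBAS bits 3:0 -> format index 4
    lbads=12                   # from "lbaf4: ms:0 lbads:12 rp:0 (in use)"
    echo "lbaf$fmt: $(( 1 << lbads ))-byte blocks, no separate metadata"   # 4096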
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:43.671 10:08:49 
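The `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` step above is the namespace walk: an extglob pattern matching both the generic ngN node and the nvmeNnM block nodes under a controller's sysfs directory, keying the per-controller map by the trailing namespace id. A rough standalone equivalent; ctrl_ns stands in for the script's _ctrl_ns nameref, and the numeric-suffix guard is added here for safety:

    shopt -s extglob                         # required for the @(...) pattern
    ctrl=/sys/class/nvme/nvme2
    declare -A ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns=${ns##*/}                         # e.g. nvme2n3
        [[ $ns == *n+([0-9]) ]] || continue  # skip the ng2 generic node
        ctrl_ns[${ns##*n}]=$ns               # ctrl_ns[3]=nvme2n3
    done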
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.671 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:43.672 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:43.672 10:08:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:43.672 10:08:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:43.672 10:08:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:43.672 10:08:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:43.672 10:08:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.672 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- 
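Discovery then advances to the next controller: the loop over /sys/class/nvme/nvme* resolves each controller's PCI address (0000:00:13.0 here) and gates it through pci_can_use from scripts/common.sh; the `[[ =~ 0000:00:13.0 ]]` and `[[ -z '' ]]` steps are allow- and block-list tests with both lists empty, so the device passes. A speculative reconstruction of that gate, assuming SPDK's usual PCI_ALLOWED/PCI_BLOCKED list variables; the real helper may differ:

    pci_can_use() {
        local i pci=$1
        # allow list set but the address not on it -> reject
        if [[ -n $PCI_ALLOWED ]] && [[ ! $PCI_ALLOWED =~ $pci ]]; then
            return 1
        fi
        # the "[[ -z '' ]]" step above: empty block list, nothing to reject
        [[ -z $PCI_BLOCKED ]] && return 0
        [[ ! $PCI_BLOCKED =~ $pci ]]
    }
    # pci_can_use 0000:00:13.0 && ctrl_dev=nvme3   # as in the trace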
nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- 
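The ctratt=0x88010 captured above is the controller-attributes bitmask, and it is the value this whole test selects on: under the NVMe 2.x bit assignments (bit 4 Endurance Groups, bit 15 Extended LBA Formats, bit 19 Flexible Data Placement) this QEMU controller advertises FDP. A one-line check per bit, with the positions worth re-verifying against the spec revision in use:

    ctratt=0x88010
    (( ctratt & 1 << 19 )) && echo "FDP supported"           # 0x80000
    (( ctratt & 1 << 15 )) && echo "extended LBA formats"    # 0x08000
    (( ctratt & 1 << 4  )) && echo "endurance groups"        # 0x00010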
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 
10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:43.673 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.674 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.933 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.933 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:43.933 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:43.934 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
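The sqes=0x66 / cqes=0x44 pair just parsed packs queue-entry sizes as two log2 nibbles, required size low and maximum high, so this controller uses the standard 64-byte submission and 16-byte completion entries:

    sqes=0x66 cqes=0x44
    echo "SQE: $(( 1 << (sqes & 0xf) )) bytes, max $(( 1 << (sqes >> 4) ))"   # 64, 64
    echo "CQE: $(( 1 << (cqes & 0xf) )) bytes, max $(( 1 << (cqes >> 4) ))"   # 16, 16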
00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:43.935 10:08:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.935 10:08:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:43.936 10:08:49 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:43.936 10:08:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:43.936 10:08:49 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:43.936 10:08:49 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:44.194 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.763 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.763 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.763 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.763 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:44.763 10:08:50 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:44.763 10:08:50 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:44.763 10:08:50 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.763 10:08:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:44.763 ************************************ 00:09:44.763 START TEST nvme_flexible_data_placement 00:09:44.763 ************************************ 00:09:44.763 10:08:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:45.022 Initializing NVMe Controllers 00:09:45.022 Attaching to 0000:00:13.0 00:09:45.022 Controller supports FDP Attached to 0000:00:13.0 00:09:45.022 Namespace ID: 1 Endurance Group ID: 1 00:09:45.022 Initialization complete. 
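Controller selection above keys off Identify Controller CTRATT bit 19, the Flexible Data Placement capability: nvme3 reports ctratt=0x88010, which has bit 19 (0x80000) set, while the other controllers report 0x8000 and are skipped. The traced helpers read ctratt from their cached register array; an equivalent one-off check with nvme-cli (a sketch, not the verbatim functions.sh code) would be:

    # True when the controller advertises FDP support (CTRATT bit 19).
    ctrl_has_fdp() {
        local ctratt
        ctratt=$(nvme id-ctrl "/dev/$1" | awk -F: '/^ctratt/ {gsub(/ /, "", $2); print $2}')
        (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp nvme3 && echo nvme3    # 0x88010 & 0x80000 != 0 -> prints nvme3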
00:09:45.022 00:09:45.022 ================================== 00:09:45.022 == FDP tests for Namespace: #01 == 00:09:45.022 ================================== 00:09:45.022 00:09:45.022 Get Feature: FDP: 00:09:45.022 ================= 00:09:45.022 Enabled: Yes 00:09:45.022 FDP configuration Index: 0 00:09:45.022 00:09:45.022 FDP configurations log page 00:09:45.022 =========================== 00:09:45.022 Number of FDP configurations: 1 00:09:45.022 Version: 0 00:09:45.022 Size: 112 00:09:45.022 FDP Configuration Descriptor: 0 00:09:45.022 Descriptor Size: 96 00:09:45.022 Reclaim Group Identifier format: 2 00:09:45.022 FDP Volatile Write Cache: Not Present 00:09:45.022 FDP Configuration: Valid 00:09:45.022 Vendor Specific Size: 0 00:09:45.022 Number of Reclaim Groups: 2 00:09:45.022 Number of Reclaim Unit Handles: 8 00:09:45.022 Max Placement Identifiers: 128 00:09:45.022 Number of Namespaces Supported: 256 00:09:45.022 Reclaim Unit Nominal Size: 6000000 bytes 00:09:45.022 Estimated Reclaim Unit Time Limit: Not Reported 00:09:45.022 RUH Desc #000: RUH Type: Initially Isolated 00:09:45.022 RUH Desc #001: RUH Type: Initially Isolated 00:09:45.022 RUH Desc #002: RUH Type: Initially Isolated 00:09:45.022 RUH Desc #003: RUH Type: Initially Isolated 00:09:45.022 RUH Desc #004: RUH Type: Initially Isolated 00:09:45.022 RUH Desc #005: RUH Type: Initially Isolated 00:09:45.022 RUH Desc #006: RUH Type: Initially Isolated 00:09:45.022 RUH Desc #007: RUH Type: Initially Isolated 00:09:45.022 00:09:45.022 FDP reclaim unit handle usage log page 00:09:45.022 ====================================== 00:09:45.022 Number of Reclaim Unit Handles: 8 00:09:45.022 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:45.022 RUH Usage Desc #001: RUH Attributes: Unused 00:09:45.022 RUH Usage Desc #002: RUH Attributes: Unused 00:09:45.022 RUH Usage Desc #003: RUH Attributes: Unused 00:09:45.022 RUH Usage Desc #004: RUH Attributes: Unused 00:09:45.022 RUH Usage Desc #005: RUH Attributes: Unused 00:09:45.022 RUH Usage Desc #006: RUH Attributes: Unused 00:09:45.022 RUH Usage Desc #007: RUH Attributes: Unused 00:09:45.022 00:09:45.022 FDP statistics log page 00:09:45.022 ======================= 00:09:45.022 Host bytes with metadata written: 846802944 00:09:45.022 Media bytes with metadata written: 846901248 00:09:45.022 Media bytes erased: 0 00:09:45.022 00:09:45.022 FDP Reclaim unit handle status 00:09:45.022 ============================== 00:09:45.022 Number of RUHS descriptors: 2 00:09:45.022 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000386d 00:09:45.022 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:45.022 00:09:45.022 FDP write on placement id: 0 success 00:09:45.022 00:09:45.022 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:45.022 00:09:45.022 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:45.022 00:09:45.022 Get Feature: FDP Events for Placement handle: #0 00:09:45.022 ======================== 00:09:45.022 Number of FDP Events: 6 00:09:45.022 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:45.022 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:45.022 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:45.022 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:45.022 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:45.022 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:45.022 00:09:45.022 FDP events log page
00:09:45.022 =================== 00:09:45.022 Number of FDP events: 1 00:09:45.022 FDP Event #0: 00:09:45.022 Event Type: RU Not Written to Capacity 00:09:45.022 Placement Identifier: Valid 00:09:45.022 NSID: Valid 00:09:45.022 Location: Valid 00:09:45.022 Placement Identifier: 0 00:09:45.022 Event Timestamp: 5 00:09:45.022 Namespace Identifier: 1 00:09:45.022 Reclaim Group Identifier: 0 00:09:45.022 Reclaim Unit Handle Identifier: 0 00:09:45.022 00:09:45.022 FDP test passed 00:09:45.022 00:09:45.022 real 0m0.333s 00:09:45.022 user 0m0.124s 00:09:45.022 sys 0m0.107s 00:09:45.022 10:08:51 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.022 10:08:51 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:45.022 ************************************ 00:09:45.022 END TEST nvme_flexible_data_placement 00:09:45.022 ************************************ 00:09:45.022 ************************************ 00:09:45.022 END TEST nvme_fdp 00:09:45.022 ************************************ 00:09:45.022 00:09:45.023 real 0m7.460s 00:09:45.023 user 0m1.056s 00:09:45.023 sys 0m1.351s 00:09:45.023 10:08:51 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.023 10:08:51 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:45.299 10:08:51 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:45.299 10:08:51 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:45.299 10:08:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.299 10:08:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.299 10:08:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.299 ************************************ 00:09:45.299 START TEST nvme_rpc 00:09:45.299 ************************************ 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:45.299 * Looking for test storage... 
00:09:45.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.299 10:08:51 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.299 --rc genhtml_branch_coverage=1 00:09:45.299 --rc genhtml_function_coverage=1 00:09:45.299 --rc genhtml_legend=1 00:09:45.299 --rc geninfo_all_blocks=1 00:09:45.299 --rc geninfo_unexecuted_blocks=1 00:09:45.299 00:09:45.299 ' 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.299 --rc genhtml_branch_coverage=1 00:09:45.299 --rc genhtml_function_coverage=1 00:09:45.299 --rc genhtml_legend=1 00:09:45.299 --rc geninfo_all_blocks=1 00:09:45.299 --rc geninfo_unexecuted_blocks=1 00:09:45.299 00:09:45.299 ' 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:45.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.299 --rc genhtml_branch_coverage=1 00:09:45.299 --rc genhtml_function_coverage=1 00:09:45.299 --rc genhtml_legend=1 00:09:45.299 --rc geninfo_all_blocks=1 00:09:45.299 --rc geninfo_unexecuted_blocks=1 00:09:45.299 00:09:45.299 ' 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.299 --rc genhtml_branch_coverage=1 00:09:45.299 --rc genhtml_function_coverage=1 00:09:45.299 --rc genhtml_legend=1 00:09:45.299 --rc geninfo_all_blocks=1 00:09:45.299 --rc geninfo_unexecuted_blocks=1 00:09:45.299 00:09:45.299 ' 00:09:45.299 10:08:51 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.299 10:08:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:45.299 10:08:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:45.299 10:08:51 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:45.299 10:08:51 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65966 00:09:45.299 10:08:51 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:45.299 10:08:51 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65966 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65966 ']' 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.299 10:08:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.558 [2024-12-06 10:08:51.493261] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:09:45.558 [2024-12-06 10:08:51.493384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65966 ] 00:09:45.558 [2024-12-06 10:08:51.651934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.816 [2024-12-06 10:08:51.754454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.816 [2024-12-06 10:08:51.754494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.382 10:08:52 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.382 10:08:52 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:46.382 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:46.640 Nvme0n1 00:09:46.640 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:46.640 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:46.640 request: 00:09:46.640 { 00:09:46.640 "bdev_name": "Nvme0n1", 00:09:46.640 "filename": "non_existing_file", 00:09:46.640 "method": "bdev_nvme_apply_firmware", 00:09:46.640 "req_id": 1 00:09:46.640 } 00:09:46.640 Got JSON-RPC error response 00:09:46.640 response: 00:09:46.640 { 00:09:46.640 "code": -32603, 00:09:46.640 "message": "open file failed." 00:09:46.640 } 00:09:46.640 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:46.640 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:46.640 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:46.899 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:46.899 10:08:52 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65966 00:09:46.899 10:08:52 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65966 ']' 00:09:46.899 10:08:52 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65966 00:09:46.899 10:08:52 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:46.899 10:08:53 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.899 10:08:53 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65966 00:09:46.899 10:08:53 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.899 killing process with pid 65966 00:09:46.899 10:08:53 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.899 10:08:53 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65966' 00:09:46.899 10:08:53 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65966 00:09:46.899 10:08:53 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65966 00:09:48.798 00:09:48.798 real 0m3.257s 00:09:48.798 user 0m6.165s 00:09:48.798 sys 0m0.496s 00:09:48.798 10:08:54 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.798 10:08:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.798 ************************************ 00:09:48.798 END TEST nvme_rpc 00:09:48.798 ************************************ 00:09:48.798 10:08:54 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:48.798 10:08:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:48.798 10:08:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.798 10:08:54 -- common/autotest_common.sh@10 -- # set +x 00:09:48.798 ************************************ 00:09:48.798 START TEST nvme_rpc_timeouts 00:09:48.798 ************************************ 00:09:48.798 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:48.798 * Looking for test storage... 00:09:48.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.799 10:08:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:48.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
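Both RPC tests drive the target through scripts/rpc.py, which speaks JSON-RPC 2.0 over the UNIX socket named in the message above (/var/tmp/spdk.sock by default). A hedged sketch of the wire exchange behind the bdev_nvme_apply_firmware failure seen in nvme_rpc earlier (the framing is standard JSON-RPC; 'nc' must be a build that supports -U for UNIX sockets):

    printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"bdev_nvme_apply_firmware","params":{"filename":"non_existing_file","bdev_name":"Nvme0n1"}}' \
        | nc -U /var/tmp/spdk.sock
    # reply, matching the log: {"jsonrpc":"2.0","id":1,"error":{"code":-32603,"message":"open file failed."}}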
00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.799 --rc genhtml_branch_coverage=1 00:09:48.799 --rc genhtml_function_coverage=1 00:09:48.799 --rc genhtml_legend=1 00:09:48.799 --rc geninfo_all_blocks=1 00:09:48.799 --rc geninfo_unexecuted_blocks=1 00:09:48.799 00:09:48.799 ' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.799 --rc genhtml_branch_coverage=1 00:09:48.799 --rc genhtml_function_coverage=1 00:09:48.799 --rc genhtml_legend=1 00:09:48.799 --rc geninfo_all_blocks=1 00:09:48.799 --rc geninfo_unexecuted_blocks=1 00:09:48.799 00:09:48.799 ' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.799 --rc genhtml_branch_coverage=1 00:09:48.799 --rc genhtml_function_coverage=1 00:09:48.799 --rc genhtml_legend=1 00:09:48.799 --rc geninfo_all_blocks=1 00:09:48.799 --rc geninfo_unexecuted_blocks=1 00:09:48.799 00:09:48.799 ' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.799 --rc genhtml_branch_coverage=1 00:09:48.799 --rc genhtml_function_coverage=1 00:09:48.799 --rc genhtml_legend=1 00:09:48.799 --rc geninfo_all_blocks=1 00:09:48.799 --rc geninfo_unexecuted_blocks=1 00:09:48.799 00:09:48.799 ' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:48.799 10:08:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66031 00:09:48.799 10:08:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66031 00:09:48.799 10:08:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66063 00:09:48.799 10:08:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:48.799 10:08:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66063 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66063 ']' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.799 10:08:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:48.799 10:08:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:48.799 [2024-12-06 10:08:54.717516] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:09:48.799 [2024-12-06 10:08:54.717644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66063 ] 00:09:48.799 [2024-12-06 10:08:54.869532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:49.058 [2024-12-06 10:08:54.972844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.058 [2024-12-06 10:08:54.972968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.622 10:08:55 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.622 10:08:55 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:49.622 10:08:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:49.622 Checking default timeout settings: 00:09:49.622 10:08:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:49.879 10:08:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:49.879 Making settings changes with rpc: 00:09:49.879 10:08:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:50.137 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:50.137 Check default vs. modified settings: 00:09:50.137 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66031 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66031 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:50.395 Setting action_on_timeout is changed as expected. 
00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66031 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66031 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:50.395 Setting timeout_us is changed as expected. 00:09:50.395 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66031 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66031 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:50.396 Setting timeout_admin_us is changed as expected. 
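The three checks above reduce to a save_config diff around a single bdev_nvme_set_options call. A condensed sketch of the flow, using the flags and temp-file names from the trace (the sed step strips JSON punctuation before comparing, as the traced script does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_66031
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_66031
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_66031 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_66031 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && exit 1          # the modified value must differ
        echo "Setting $setting is changed as expected."
    done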
00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66031 /tmp/settings_modified_66031 00:09:50.396 10:08:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66063 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66063 ']' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66063 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66063 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66063' 00:09:50.396 killing process with pid 66063 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66063 00:09:50.396 10:08:56 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66063 00:09:51.769 RPC TIMEOUT SETTING TEST PASSED. 00:09:51.769 10:08:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:51.769 00:09:51.769 real 0m3.207s 00:09:51.769 user 0m6.268s 00:09:51.769 sys 0m0.503s 00:09:51.769 10:08:57 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.769 10:08:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:51.769 ************************************ 00:09:51.769 END TEST nvme_rpc_timeouts 00:09:51.769 ************************************ 00:09:51.769 10:08:57 -- spdk/autotest.sh@239 -- # uname -s 00:09:51.769 10:08:57 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:51.769 10:08:57 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:51.769 10:08:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.769 10:08:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.769 10:08:57 -- common/autotest_common.sh@10 -- # set +x 00:09:51.769 ************************************ 00:09:51.769 START TEST sw_hotplug 00:09:51.769 ************************************ 00:09:51.769 10:08:57 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:51.769 * Looking for test storage... 
00:09:51.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.769 10:08:57 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:51.769 10:08:57 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:51.769 10:08:57 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:51.769 10:08:57 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.770 10:08:57 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:51.770 10:08:57 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.770 10:08:57 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.770 --rc genhtml_branch_coverage=1 00:09:51.770 --rc genhtml_function_coverage=1 00:09:51.770 --rc genhtml_legend=1 00:09:51.770 --rc geninfo_all_blocks=1 00:09:51.770 --rc geninfo_unexecuted_blocks=1 00:09:51.770 00:09:51.770 ' 00:09:51.770 10:08:57 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.770 --rc genhtml_branch_coverage=1 00:09:51.770 --rc genhtml_function_coverage=1 00:09:51.770 --rc genhtml_legend=1 00:09:51.770 --rc geninfo_all_blocks=1 00:09:51.770 --rc geninfo_unexecuted_blocks=1 00:09:51.770 00:09:51.770 ' 00:09:51.770 10:08:57 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.770 --rc genhtml_branch_coverage=1 00:09:51.770 --rc genhtml_function_coverage=1 00:09:51.770 --rc genhtml_legend=1 00:09:51.770 --rc geninfo_all_blocks=1 00:09:51.770 --rc geninfo_unexecuted_blocks=1 00:09:51.770 00:09:51.770 ' 00:09:51.770 10:08:57 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:51.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.770 --rc genhtml_branch_coverage=1 00:09:51.770 --rc genhtml_function_coverage=1 00:09:51.770 --rc genhtml_legend=1 00:09:51.770 --rc geninfo_all_blocks=1 00:09:51.770 --rc geninfo_unexecuted_blocks=1 00:09:51.770 00:09:51.770 ' 00:09:51.770 10:08:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.286 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:52.286 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:52.286 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:52.286 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:52.286 10:08:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:52.286 10:08:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:52.286 10:08:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:52.286 10:08:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.286 
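nvme_in_userspace above assembles the NVMe BDF list purely from PCI class codes: class 01 (mass storage controller), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). The traced pipeline is runnable on its own:

    # List NVMe controllers by BDF: keep prog-if 02 lines, match class "0108".
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0, 0000:00:11.0, 0000:00:12.0, 0000:00:13.0 (one per line)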
10:08:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:52.286 10:08:58 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:52.286 10:08:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:52.286 10:08:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:52.286 10:08:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:52.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.801 Waiting for block devices as requested 00:09:52.801 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.801 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.801 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.058 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:58.399 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:58.399 10:09:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:58.399 10:09:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:58.399 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:58.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.399 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:58.661 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:58.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.930 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:58.930 10:09:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66914 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:59.195 10:09:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:59.195 10:09:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:59.195 10:09:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:59.195 10:09:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:59.195 10:09:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:59.195 10:09:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:59.195 Initializing NVMe Controllers 00:09:59.195 Attaching to 0000:00:10.0 00:09:59.195 Attaching to 0000:00:11.0 00:09:59.195 Attached to 0000:00:10.0 00:09:59.195 Attached to 0000:00:11.0 00:09:59.195 Initialization complete. Starting I/O... 00:09:59.195 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:59.195 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:59.195 00:10:00.566 QEMU NVMe Ctrl (12340 ): 2533 I/Os completed (+2533) 00:10:00.566 QEMU NVMe Ctrl (12341 ): 2434 I/Os completed (+2434) 00:10:00.566 00:10:01.526 QEMU NVMe Ctrl (12340 ): 5738 I/Os completed (+3205) 00:10:01.526 QEMU NVMe Ctrl (12341 ): 5551 I/Os completed (+3117) 00:10:01.526 00:10:02.467 QEMU NVMe Ctrl (12340 ): 8721 I/Os completed (+2983) 00:10:02.467 QEMU NVMe Ctrl (12341 ): 8605 I/Os completed (+3054) 00:10:02.467 00:10:03.400 QEMU NVMe Ctrl (12340 ): 11763 I/Os completed (+3042) 00:10:03.400 QEMU NVMe Ctrl (12341 ): 11642 I/Os completed (+3037) 00:10:03.400 00:10:04.332 QEMU NVMe Ctrl (12340 ): 14850 I/Os completed (+3087) 00:10:04.332 QEMU NVMe Ctrl (12341 ): 14794 I/Os completed (+3152) 00:10:04.332 00:10:05.264 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:05.264 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:05.264 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:05.264 [2024-12-06 10:09:11.140184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:05.264 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:05.264 [2024-12-06 10:09:11.144022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.144173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.144236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.144309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:05.264 [2024-12-06 10:09:11.146283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.146333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.146347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.146364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:05.264 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:05.264 [2024-12-06 10:09:11.161054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
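The helper above surprise-removes each controller (the driver logs "in failed state" and aborts the outstanding commands) and later rebinds it. The trace shows only the echoed values, not the sysfs files they target; a plausible equivalent using the standard kernel PCI interfaces (an assumption, not the verbatim sw_hotplug.sh paths) is:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"                  # hot-remove the device
    echo 1 > /sys/bus/pci/rescan                                 # re-enumerate the slot
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                     # bind to the userspace driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"        # clear the override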
00:10:05.264 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:05.264 [2024-12-06 10:09:11.162156] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.162197] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.162217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.264 [2024-12-06 10:09:11.162233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.265 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:05.265 [2024-12-06 10:09:11.163969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.265 [2024-12-06 10:09:11.164010] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.265 [2024-12-06 10:09:11.164025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.265 [2024-12-06 10:09:11.164037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:05.265 Attaching to 0000:00:10.0 00:10:05.265 Attached to 0000:00:10.0 00:10:05.265 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:05.265 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:05.265 10:09:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:05.265 Attaching to 0000:00:11.0 00:10:05.265 Attached to 0000:00:11.0 00:10:06.249 QEMU NVMe Ctrl (12340 ): 3050 I/Os completed (+3050) 00:10:06.249 QEMU NVMe Ctrl (12341 ): 2825 I/Os completed (+2825) 00:10:06.249 00:10:07.183 QEMU NVMe Ctrl (12340 ): 6256 I/Os completed (+3206) 00:10:07.183 QEMU NVMe Ctrl (12341 ): 6028 I/Os completed (+3203) 00:10:07.183 00:10:08.557 QEMU NVMe Ctrl (12340 ): 9919 I/Os completed (+3663) 00:10:08.557 QEMU NVMe Ctrl (12341 ): 9700 I/Os completed (+3672) 00:10:08.557 00:10:09.490 QEMU NVMe Ctrl (12340 ): 13536 I/Os completed (+3617) 00:10:09.490 QEMU NVMe Ctrl (12341 ): 13340 I/Os completed (+3640) 00:10:09.490 00:10:10.421 QEMU NVMe Ctrl (12340 ): 17234 I/Os completed (+3698) 00:10:10.421 QEMU NVMe Ctrl (12341 ): 17020 I/Os completed (+3680) 00:10:10.421 00:10:11.357 QEMU NVMe Ctrl (12340 ): 20945 I/Os completed (+3711) 00:10:11.357 QEMU NVMe Ctrl (12341 ): 20718 I/Os completed (+3698) 00:10:11.357 00:10:12.291 QEMU NVMe Ctrl (12340 ): 24602 I/Os completed (+3657) 00:10:12.291 QEMU NVMe Ctrl (12341 ): 24394 I/Os completed (+3676) 00:10:12.291 00:10:13.223 QEMU NVMe Ctrl (12340 ): 28272 I/Os completed (+3670) 00:10:13.223 QEMU 
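The "echo 1" traces at sw_hotplug.sh@40 above hide their redirection targets because xtrace does not print redirections; judging by the rescan write visible in the trap at sw_hotplug.sh@112 later in this log, a plausible standalone sketch of the remove/rescan pair is (hypothetical helper for illustration; standard Linux sysfs hot-remove interface assumed):

    # Hot-remove one controller, then ask the PCI bus to rediscover it.
    remove_and_rescan() {
        local bdf=$1                                 # e.g. 0000:00:10.0
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"  # detach and delete the device node
        sleep 1                                      # give the unbind a moment to settle
        echo 1 > /sys/bus/pci/rescan                 # re-enumerate the bus
    }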
00:10:13.223 QEMU NVMe Ctrl (12341 ): 28069 I/Os completed (+3675)
00:10:13.223
00:10:14.594 QEMU NVMe Ctrl (12340 ): 31809 I/Os completed (+3537)
00:10:14.594 QEMU NVMe Ctrl (12341 ): 31581 I/Os completed (+3512)
00:10:14.594
00:10:15.162 QEMU NVMe Ctrl (12340 ): 34978 I/Os completed (+3169)
00:10:15.162 QEMU NVMe Ctrl (12341 ): 34720 I/Os completed (+3139)
00:10:15.162
00:10:16.533 QEMU NVMe Ctrl (12340 ): 38108 I/Os completed (+3130)
00:10:16.533 QEMU NVMe Ctrl (12341 ): 37926 I/Os completed (+3206)
00:10:16.533
00:10:17.465 QEMU NVMe Ctrl (12340 ): 41605 I/Os completed (+3497)
00:10:17.465 QEMU NVMe Ctrl (12341 ): 41338 I/Os completed (+3412)
00:10:17.465
00:10:17.465 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:17.465 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:17.465 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:17.465 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:17.465 [2024-12-06 10:09:23.401791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:17.465 Controller removed: QEMU NVMe Ctrl (12340 )
00:10:17.465 [2024-12-06 10:09:23.402950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.465 [2024-12-06 10:09:23.402999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.465 [2024-12-06 10:09:23.403022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.465 [2024-12-06 10:09:23.403041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.465 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:17.465 [2024-12-06 10:09:23.404925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.465 [2024-12-06 10:09:23.404973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.465 [2024-12-06 10:09:23.404987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.465 [2024-12-06 10:09:23.405001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:17.466 [2024-12-06 10:09:23.424825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:17.466 Controller removed: QEMU NVMe Ctrl (12341 )
00:10:17.466 [2024-12-06 10:09:23.425943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 [2024-12-06 10:09:23.426058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 [2024-12-06 10:09:23.426098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 [2024-12-06 10:09:23.426165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:17.466 [2024-12-06 10:09:23.427914] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 [2024-12-06 10:09:23.427952] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 [2024-12-06 10:09:23.427968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 [2024-12-06 10:09:23.427982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:17.466 EAL: Cannot open sysfs resource
00:10:17.466 EAL: pci_scan_one(): cannot parse resource
00:10:17.466 EAL: Scan for (pci) bus failed.
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:17.466 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:17.466 Attaching to 0000:00:10.0
00:10:17.466 Attached to 0000:00:10.0
00:10:17.723 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:17.723 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:17.723 10:09:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:17.723 Attaching to 0000:00:11.0
00:10:17.723 Attached to 0000:00:11.0
00:10:18.287 QEMU NVMe Ctrl (12340 ): 2393 I/Os completed (+2393)
00:10:18.287 QEMU NVMe Ctrl (12341 ): 2036 I/Os completed (+2036)
00:10:18.287
00:10:19.219 QEMU NVMe Ctrl (12340 ): 5506 I/Os completed (+3113)
00:10:19.219 QEMU NVMe Ctrl (12341 ): 5082 I/Os completed (+3046)
00:10:19.219
00:10:20.591 QEMU NVMe Ctrl (12340 ): 8773 I/Os completed (+3267)
00:10:20.591 QEMU NVMe Ctrl (12341 ): 8336 I/Os completed (+3254)
00:10:20.591
00:10:21.525 QEMU NVMe Ctrl (12340 ): 12188 I/Os completed (+3415)
00:10:21.525 QEMU NVMe Ctrl (12341 ): 11775 I/Os completed (+3439)
00:10:21.525
00:10:22.460 QEMU NVMe Ctrl (12340 ): 15820 I/Os completed (+3632)
00:10:22.460 QEMU NVMe Ctrl (12341 ): 15414 I/Os completed (+3639)
00:10:22.460
00:10:23.395 QEMU NVMe Ctrl (12340 ): 19448 I/Os completed (+3628)
00:10:23.395 QEMU NVMe Ctrl (12341 ): 19053 I/Os completed (+3639)
00:10:23.395
00:10:24.365 QEMU NVMe Ctrl (12340 ): 23058 I/Os completed (+3610)
00:10:24.365 QEMU NVMe Ctrl (12341 ): 22645 I/Os completed (+3592)
00:10:24.365
00:10:25.299 QEMU NVMe Ctrl (12340 ): 26653 I/Os completed (+3595)
00:10:25.299 QEMU NVMe Ctrl (12341 ): 26255 I/Os completed (+3610)
00:10:25.299
00:10:26.233 QEMU NVMe Ctrl (12340 ): 30242 I/Os completed (+3589)
00:10:26.233 QEMU NVMe Ctrl (12341 ): 29628 I/Os completed (+3373)
00:10:26.233
00:10:27.174 QEMU NVMe Ctrl (12340 ): 33295 I/Os completed (+3053)
00:10:27.175 QEMU NVMe Ctrl (12341 ): 32664 I/Os completed (+3036)
00:10:27.175
00:10:28.557 QEMU NVMe Ctrl (12340 ): 36733 I/Os completed (+3438)
00:10:28.557 QEMU NVMe Ctrl (12341 ): 36096 I/Os completed (+3432)
00:10:28.557
00:10:29.498 QEMU NVMe Ctrl (12340 ): 40345 I/Os completed (+3612)
00:10:29.498 QEMU NVMe Ctrl (12341 ): 39714 I/Os completed (+3618)
00:10:29.498
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:29.756 [2024-12-06 10:09:35.679621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:29.756 Controller removed: QEMU NVMe Ctrl (12340 )
00:10:29.756 [2024-12-06 10:09:35.680846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.680975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.681050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.681084] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:29.756 [2024-12-06 10:09:35.683323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.683435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.683483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.683548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:29.756 [2024-12-06 10:09:35.701118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:29.756 Controller removed: QEMU NVMe Ctrl (12341 )
00:10:29.756 [2024-12-06 10:09:35.702287] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.702326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.702345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.702362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:29.756 [2024-12-06 10:09:35.704144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.704247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.704269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 [2024-12-06 10:09:35.704281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:29.756 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:10:29.756 EAL: Scan for (pci) bus failed.
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:29.756 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:29.756 Attaching to 0000:00:10.0
00:10:29.756 Attached to 0000:00:10.0
00:10:30.016 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:30.016 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:30.016 10:09:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:10:30.016 Attaching to 0000:00:11.0
00:10:30.016 Attached to 0000:00:11.0
00:10:30.016 unregister_dev: QEMU NVMe Ctrl (12340 )
00:10:30.016 unregister_dev: QEMU NVMe Ctrl (12341 )
00:10:30.016 [2024-12-06 10:09:35.982821] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:10:42.235 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:10:42.235 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:42.235 10:09:47 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.84
00:10:42.235 10:09:47 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.84
00:10:42.235 10:09:47 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:10:42.235 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.84
00:10:42.235 10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.84 2
00:10:42.235 remove_attach_helper took 42.84s to complete (handling 2 nvme drive(s))
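The 42.84 figure above comes from bash's built-in time with a custom TIMEFORMAT, as traced at autotest_common.sh@713: %2R prints only the real (wall-clock) time with two decimal places. A reduced sketch of the same timing pattern:

    TIMEFORMAT=%2R                        # wall-clock seconds, two decimals
    time remove_attach_helper 3 6 false   # prints e.g. "42.84" on stderr when done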
10:09:47 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66914
00:10:48.811 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66914) - No such process
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66914
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67462
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67462
00:10:48.811 10:09:53 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67462 ']'
00:10:48.811 10:09:53 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:48.811 10:09:53 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:48.811 10:09:53 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:48.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:48.811 10:09:53 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:48.811 10:09:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:48.811 10:09:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:10:48.811 [2024-12-06 10:09:54.062482] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization...
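The kill -0 / wait pair above is the usual liveness-then-reap idiom: kill -0 sends no signal and only asks whether the PID still exists, so the failing check ("No such process") confirms the standalone hotplug app already exited, and wait then collects its exit status. A condensed sketch:

    if ! kill -0 "$hotplug_pid" 2>/dev/null; then   # process already gone?
        wait "$hotplug_pid"                         # reap it and propagate its exit code
    fi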
00:10:48.811 [2024-12-06 10:09:54.062868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67462 ]
00:10:48.811 [2024-12-06 10:09:54.221202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:48.811 [2024-12-06 10:09:54.317328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:10:48.811 10:09:54 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:10:48.811 10:09:54 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:55.388 10:10:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.388 10:10:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:55.388 10:10:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:10:55.388 10:10:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
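This round drives hotplug through the SPDK target rather than the standalone example app: bdev_nvme_set_hotplug -e, issued above at sw_hotplug.sh@115, turns on the nvme driver's hotplug monitor so removed controllers drop out of the bdev list and re-attached ones come back. The equivalent manual RPC calls (assuming the stock rpc.py from the same tree):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e   # enable monitoring
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -d   # disable it again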
[2024-12-06 10:10:01.001268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:10:55.388 [2024-12-06 10:10:01.002678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.002717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.002729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 [2024-12-06 10:10:01.002745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.002753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.002761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 [2024-12-06 10:10:01.002768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.002776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.002783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 [2024-12-06 10:10:01.002794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.002800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.002808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 [2024-12-06 10:10:01.401266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:10:55.388 [2024-12-06 10:10:01.402656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.402690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.402702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 [2024-12-06 10:10:01.402717] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.402726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.402733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 [2024-12-06 10:10:01.402742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.402749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.402757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 [2024-12-06 10:10:01.402764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:10:55.388 [2024-12-06 10:10:01.402772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:10:55.388 [2024-12-06 10:10:01.402779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:10:55.388 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:10:55.388 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:10:55.388 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:10:55.388 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:10:55.388 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:10:55.388 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:10:55.388 10:10:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:55.388 10:10:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:10:55.388 10:10:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:55.388 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:10:55.647 10:10:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:07.859 10:10:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.859 10:10:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:07.859 10:10:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:07.859 [2024-12-06 10:10:13.801483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:07.859 [2024-12-06 10:10:13.802948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:07.859 [2024-12-06 10:10:13.803070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:07.859 [2024-12-06 10:10:13.803139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:07.859 [2024-12-06 10:10:13.803198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:07.859 [2024-12-06 10:10:13.803217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:07.859 [2024-12-06 10:10:13.803243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:07.859 [2024-12-06 10:10:13.803379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:07.859 [2024-12-06 10:10:13.803400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:07.859 [2024-12-06 10:10:13.803424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:07.859 [2024-12-06 10:10:13.803459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:07.859 [2024-12-06 10:10:13.803510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:07.859 [2024-12-06 10:10:13.803539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
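In bdev mode the helper decides when a device is really gone by asking the target, not sysfs: the xtrace above (sw_hotplug.sh@12-13 and @50-51) pieces together into roughly the following loop (condensed reconstruction; rpc_cmd is the test framework's wrapper around rpc.py, and the /dev/fd/63 in the trace is just bash process substitution, for which a plain pipe is equivalent):

    bdev_bdfs() {
        # List the PCI addresses backing the current NVMe bdevs, deduplicated.
        rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do        # sw_hotplug.sh@50: anything still attached?
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done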
00:11:07.859 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:11:07.860 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:07.860 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:07.860 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:07.860 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:07.860 10:10:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:07.860 10:10:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:07.860 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:07.860 10:10:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.860 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:07.860 10:10:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:08.118 [2024-12-06 10:10:14.201489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:08.118 [2024-12-06 10:10:14.202813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.118 [2024-12-06 10:10:14.202848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.118 [2024-12-06 10:10:14.202861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:08.118 [2024-12-06 10:10:14.202876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.118 [2024-12-06 10:10:14.202886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.118 [2024-12-06 10:10:14.202893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:08.118 [2024-12-06 10:10:14.202902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.118 [2024-12-06 10:10:14.202908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.118 [2024-12-06 10:10:14.202916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:08.118 [2024-12-06 10:10:14.202924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:08.118 [2024-12-06 10:10:14.202931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:08.118 [2024-12-06 10:10:14.202938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:08.378 10:10:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:08.378 10:10:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:08.378 10:10:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:08.378 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:08.636 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:08.636 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:08.636 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:08.636 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:20.876 10:10:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:20.876 10:10:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:20.876 10:10:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:11:20.876 [2024-12-06 10:10:26.701688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:11:20.876 [2024-12-06 10:10:26.703094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:20.876 [2024-12-06 10:10:26.703131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:20.876 [2024-12-06 10:10:26.703142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:20.876 [2024-12-06 10:10:26.703159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:20.876 [2024-12-06 10:10:26.703167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:20.876 [2024-12-06 10:10:26.703178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:20.876 [2024-12-06 10:10:26.703185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:20.876 [2024-12-06 10:10:26.703194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:20.876 [2024-12-06 10:10:26.703201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:20.876 [2024-12-06 10:10:26.703209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:20.876 [2024-12-06 10:10:26.703215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:20.876 [2024-12-06 10:10:26.703223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:20.876 10:10:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:11:20.876 10:10:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:11:21.138 [2024-12-06 10:10:27.201691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:11:21.138 [2024-12-06 10:10:27.202981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:21.138 [2024-12-06 10:10:27.203013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:11:21.138 [2024-12-06 10:10:27.203025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:21.138 [2024-12-06 10:10:27.203041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:21.138 [2024-12-06 10:10:27.203049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:11:21.138 [2024-12-06 10:10:27.203056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:21.138 [2024-12-06 10:10:27.203064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:21.138 [2024-12-06 10:10:27.203070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:11:21.138 [2024-12-06 10:10:27.203080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:21.138 [2024-12-06 10:10:27.203087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:21.138 [2024-12-06 10:10:27.203095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:11:21.138 [2024-12-06 10:10:27.203102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:21.400 10:10:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:21.400 10:10:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:21.400 10:10:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:11:21.400 10:10:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.63
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.63
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.63
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.63 2
00:11:33.700 remove_attach_helper took 44.63s to complete (handling 2 nvme drive(s))
10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:11:33.700 10:10:39 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:11:33.700 10:10:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
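After each re-attach the helper verifies recovery by comparing the sorted BDF list against the expected pair, as in the [[ ... ]] test at sw_hotplug.sh@71 above; a condensed form of that check:

    expected='0000:00:10.0 0000:00:11.0'
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "$expected" ]]   # both controllers must be back before the next event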
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.295 10:10:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.295 10:10:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.295 10:10:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:40.295 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:40.295 [2024-12-06 10:10:45.656877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:40.295 [2024-12-06 10:10:45.657961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:45.658062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:45.658126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 [2024-12-06 10:10:45.658187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:45.658206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:45.658318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 [2024-12-06 10:10:45.658346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:45.658392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:45.658418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 [2024-12-06 10:10:45.658483] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:45.658504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:45.658574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 [2024-12-06 10:10:46.056882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:40.295 [2024-12-06 10:10:46.058025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:46.058127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:46.058191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 [2024-12-06 10:10:46.058254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:46.058274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:46.058322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 [2024-12-06 10:10:46.058348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:46.058394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:46.058422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 [2024-12-06 10:10:46.058487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.295 [2024-12-06 10:10:46.058509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.295 [2024-12-06 10:10:46.058533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.295 10:10:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.295 10:10:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.295 10:10:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:40.295 10:10:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.532 10:10:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.532 10:10:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.532 10:10:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.532 10:10:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.532 10:10:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.532 10:10:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:52.532 10:10:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.532 [2024-12-06 10:10:58.557070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:52.532 [2024-12-06 10:10:58.558069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.532 [2024-12-06 10:10:58.558171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.532 [2024-12-06 10:10:58.558228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.532 [2024-12-06 10:10:58.558294] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.532 [2024-12-06 10:10:58.558312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.532 [2024-12-06 10:10:58.558368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.532 [2024-12-06 10:10:58.558395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.532 [2024-12-06 10:10:58.558413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.532 [2024-12-06 10:10:58.558470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.532 [2024-12-06 10:10:58.558499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.532 [2024-12-06 10:10:58.558544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.532 [2024-12-06 10:10:58.558571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.107 10:10:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.107 10:10:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.107 10:10:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:53.107 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:53.107 [2024-12-06 10:10:59.257079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:53.107 [2024-12-06 10:10:59.258147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.107 [2024-12-06 10:10:59.258253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.107 [2024-12-06 10:10:59.258314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.107 [2024-12-06 10:10:59.258369] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.107 [2024-12-06 10:10:59.258393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.107 [2024-12-06 10:10:59.258417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.107 [2024-12-06 10:10:59.258442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.107 [2024-12-06 10:10:59.258474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.107 [2024-12-06 10:10:59.258534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.107 [2024-12-06 10:10:59.258559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:53.107 [2024-12-06 10:10:59.258576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:53.107 [2024-12-06 10:10:59.258599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.680 10:10:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.680 10:10:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.680 10:10:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
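
Once the poll sees zero bdevs, lines @56-62 re-probe and re-bind the devices. The xtrace records only the values being echoed, never the files they are redirected into, so every sysfs path in this sketch is an assumption based on the standard Linux PCI hotplug interface:

    # Hypothetical redirection targets -- the trace shows only the echoed values.
    echo 1 > /sys/bus/pci/rescan                                            # @56 (assumed)
    for dev in "${nvmes[@]}"; do                                            # @58
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59 (assumed)
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60 (assumed)
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind || true     # @61 (assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62 (assumed)
    done
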
00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.680 10:10:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.918 10:11:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.918 10:11:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.918 10:11:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.918 [2024-12-06 10:11:11.857300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:05.918 [2024-12-06 10:11:11.858567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.918 [2024-12-06 10:11:11.858662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.918 [2024-12-06 10:11:11.858698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.918 [2024-12-06 10:11:11.858733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.918 [2024-12-06 10:11:11.858751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.918 [2024-12-06 10:11:11.858775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.918 [2024-12-06 10:11:11.858799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.918 [2024-12-06 10:11:11.858820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.918 [2024-12-06 10:11:11.858845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.918 [2024-12-06 10:11:11.858869] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.918 [2024-12-06 10:11:11.858885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.918 [2024-12-06 10:11:11.858909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.918 10:11:11 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.918 10:11:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.918 10:11:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.918 10:11:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:05.918 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.489 [2024-12-06 10:11:12.357307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:06.489 [2024-12-06 10:11:12.358344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.489 [2024-12-06 10:11:12.358376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.489 [2024-12-06 10:11:12.358389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.489 [2024-12-06 10:11:12.358403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.489 [2024-12-06 10:11:12.358411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.489 [2024-12-06 10:11:12.358418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.489 [2024-12-06 10:11:12.358427] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.489 [2024-12-06 10:11:12.358434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.489 [2024-12-06 10:11:12.358442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.489 [2024-12-06 10:11:12.358457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.489 [2024-12-06 10:11:12.358468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.489 [2024-12-06 10:11:12.358474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
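
Every iteration closes the same way (@66-71, here and above): give the target twelve seconds to re-enumerate the controllers, then require that bdev_bdfs reports exactly the original pair of addresses before (( hotplug_events-- )) moves to the next round. Roughly, using the names from the trace:

    sleep 12                             # @66: settle time after re-bind
    bdfs=($(bdev_bdfs))                  # @70
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]    # @71: must be "0000:00:10.0 0000:00:11.0" again

The whole remove/attach sequence runs under a timing wrapper, which is where the "remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s))" summary below comes from.
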
00:12:06.489 10:11:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.489 10:11:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.489 10:11:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:06.489 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:06.748 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:06.748 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.748 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:12:18.961 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:18.961 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67462 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67462 ']' 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67462 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67462 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.961 10:11:24 sw_hotplug -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67462' 00:12:18.961 killing process with pid 67462 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67462 00:12:18.961 10:11:24 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67462 00:12:19.902 10:11:25 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:20.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.733 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.733 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:20.733 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.733 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.994 00:12:20.994 real 2m29.161s 00:12:20.994 user 1m51.139s 00:12:20.994 sys 0m16.619s 00:12:20.994 10:11:26 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.994 10:11:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.994 ************************************ 00:12:20.994 END TEST sw_hotplug 00:12:20.994 ************************************ 00:12:20.994 10:11:26 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:20.994 10:11:26 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:20.994 10:11:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:20.995 10:11:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.995 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:12:20.995 ************************************ 00:12:20.995 START TEST nvme_xnvme 00:12:20.995 ************************************ 00:12:20.995 10:11:26 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:20.995 * Looking for test storage... 
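
The shutdown above is autotest_common.sh's killprocess helper, and the traced line numbers (@954-978) give away its shape: refuse an empty PID, confirm the process is still alive, resolve its command name with ps (so a sudo wrapper could be signalled differently), then kill and reap it. A sketch, with the sudo branch elided since this run takes the plain path (process_name=reactor_0):

    # Reconstructed from the killprocess trace (autotest_common.sh@954-978).
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                            # @954
        kill -0 "$pid" || return 0                           # @958: already gone
        if [[ $(uname) == Linux ]]; then                     # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960
        fi
        if [[ $process_name == sudo ]]; then                 # @964
            : # sudo wrapper handling elided (assumed)
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap, propagate status
    }
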
00:12:20.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.995 10:11:27 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:20.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.995 --rc genhtml_branch_coverage=1 00:12:20.995 --rc genhtml_function_coverage=1 00:12:20.995 --rc genhtml_legend=1 00:12:20.995 --rc geninfo_all_blocks=1 00:12:20.995 --rc geninfo_unexecuted_blocks=1 00:12:20.995 00:12:20.995 ' 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:20.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.995 --rc genhtml_branch_coverage=1 00:12:20.995 --rc genhtml_function_coverage=1 00:12:20.995 --rc genhtml_legend=1 00:12:20.995 --rc geninfo_all_blocks=1 00:12:20.995 --rc geninfo_unexecuted_blocks=1 00:12:20.995 00:12:20.995 ' 00:12:20.995 10:11:27 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:20.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.995 --rc genhtml_branch_coverage=1 00:12:20.995 --rc genhtml_function_coverage=1 00:12:20.995 --rc genhtml_legend=1 00:12:20.995 --rc geninfo_all_blocks=1 00:12:20.995 --rc geninfo_unexecuted_blocks=1 00:12:20.995 00:12:20.995 ' 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:20.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.995 --rc genhtml_branch_coverage=1 00:12:20.995 --rc genhtml_function_coverage=1 00:12:20.995 --rc genhtml_legend=1 00:12:20.995 --rc geninfo_all_blocks=1 00:12:20.995 --rc geninfo_unexecuted_blocks=1 00:12:20.995 00:12:20.995 ' 00:12:20.995 10:11:27 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:20.995 10:11:27 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:20.995 10:11:27 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:20.995 10:11:27 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:20.996 10:11:27 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:20.996 10:11:27 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:20.996 10:11:27 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:20.996 10:11:27 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
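
The CONFIG_* wall above is test/common/build_config.sh, the shell-readable mirror of the ./configure result; CONFIG_XNVME=y is the switch this whole stage depends on, and CONFIG_ASAN=y / CONFIG_UBSAN=y line up with the SPDK_RUN_ASAN / SPDK_RUN_UBSAN flags set in autorun-spdk.conf. Right after sourcing it, applications.sh sanity-checks the build flavour by string-matching the generated C header, which is why the full config.h is echoed into the log below. Reconstructed from the @22-24 trace:

    # applications.sh@22-24 as traced; the debug-only body is elided/assumed.
    if [[ -e $_root/include/spdk/config.h ]] \
        && [[ $(< "$_root/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]] \
        && ((SPDK_AUTOTEST_DEBUG_APPS)); then
        : # debug-app overrides would be enabled here
    fi
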
00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:21.261 #define SPDK_CONFIG_H 00:12:21.261 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:21.261 #define SPDK_CONFIG_APPS 1 00:12:21.261 #define SPDK_CONFIG_ARCH native 00:12:21.261 #define SPDK_CONFIG_ASAN 1 00:12:21.261 #undef SPDK_CONFIG_AVAHI 00:12:21.261 #undef SPDK_CONFIG_CET 00:12:21.261 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:21.261 #define SPDK_CONFIG_COVERAGE 1 00:12:21.261 #define SPDK_CONFIG_CROSS_PREFIX 00:12:21.261 #undef SPDK_CONFIG_CRYPTO 00:12:21.261 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:21.261 #undef SPDK_CONFIG_CUSTOMOCF 00:12:21.261 #undef SPDK_CONFIG_DAOS 00:12:21.261 #define SPDK_CONFIG_DAOS_DIR 00:12:21.261 #define SPDK_CONFIG_DEBUG 1 00:12:21.261 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:21.261 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:12:21.261 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:21.261 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:21.261 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:21.261 #undef SPDK_CONFIG_DPDK_UADK 00:12:21.261 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:21.261 #define SPDK_CONFIG_EXAMPLES 1 00:12:21.261 #undef SPDK_CONFIG_FC 00:12:21.261 #define SPDK_CONFIG_FC_PATH 00:12:21.261 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:21.261 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:21.261 #define SPDK_CONFIG_FSDEV 1 00:12:21.261 #undef SPDK_CONFIG_FUSE 00:12:21.261 #undef SPDK_CONFIG_FUZZER 00:12:21.261 #define SPDK_CONFIG_FUZZER_LIB 00:12:21.261 #undef SPDK_CONFIG_GOLANG 00:12:21.261 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:21.261 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:21.261 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:21.261 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:21.261 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:21.261 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:21.261 #undef SPDK_CONFIG_HAVE_LZ4 00:12:21.261 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:21.261 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:21.261 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:21.261 #define SPDK_CONFIG_IDXD 1 00:12:21.261 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:21.261 #undef SPDK_CONFIG_IPSEC_MB 00:12:21.261 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:21.261 #define SPDK_CONFIG_ISAL 1 00:12:21.261 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:21.261 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:21.261 #define SPDK_CONFIG_LIBDIR 00:12:21.261 #undef SPDK_CONFIG_LTO 00:12:21.261 #define SPDK_CONFIG_MAX_LCORES 128 00:12:21.261 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:21.261 #define SPDK_CONFIG_NVME_CUSE 1 00:12:21.261 #undef SPDK_CONFIG_OCF 00:12:21.261 #define SPDK_CONFIG_OCF_PATH 00:12:21.261 #define SPDK_CONFIG_OPENSSL_PATH 00:12:21.261 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:21.261 
#define SPDK_CONFIG_PGO_DIR 00:12:21.261 #undef SPDK_CONFIG_PGO_USE 00:12:21.261 #define SPDK_CONFIG_PREFIX /usr/local 00:12:21.261 #undef SPDK_CONFIG_RAID5F 00:12:21.261 #undef SPDK_CONFIG_RBD 00:12:21.261 #define SPDK_CONFIG_RDMA 1 00:12:21.261 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:21.261 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:21.261 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:21.261 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:21.261 #define SPDK_CONFIG_SHARED 1 00:12:21.261 #undef SPDK_CONFIG_SMA 00:12:21.261 #define SPDK_CONFIG_TESTS 1 00:12:21.261 #undef SPDK_CONFIG_TSAN 00:12:21.261 #define SPDK_CONFIG_UBLK 1 00:12:21.261 #define SPDK_CONFIG_UBSAN 1 00:12:21.261 #undef SPDK_CONFIG_UNIT_TESTS 00:12:21.261 #undef SPDK_CONFIG_URING 00:12:21.261 #define SPDK_CONFIG_URING_PATH 00:12:21.261 #undef SPDK_CONFIG_URING_ZNS 00:12:21.261 #undef SPDK_CONFIG_USDT 00:12:21.261 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:21.261 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:21.261 #undef SPDK_CONFIG_VFIO_USER 00:12:21.261 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:21.261 #define SPDK_CONFIG_VHOST 1 00:12:21.261 #define SPDK_CONFIG_VIRTIO 1 00:12:21.261 #undef SPDK_CONFIG_VTUNE 00:12:21.261 #define SPDK_CONFIG_VTUNE_DIR 00:12:21.261 #define SPDK_CONFIG_WERROR 1 00:12:21.261 #define SPDK_CONFIG_WPDK_DIR 00:12:21.261 #define SPDK_CONFIG_XNVME 1 00:12:21.261 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:21.261 10:11:27 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:21.261 10:11:27 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.261 10:11:27 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.261 10:11:27 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.261 10:11:27 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.261 10:11:27 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.262 10:11:27 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.262 10:11:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.262 10:11:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.262 10:11:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:21.262 10:11:27 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:21.262 10:11:27 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:21.262 10:11:27 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:21.262 10:11:27 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:21.263 
10:11:27 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:21.263 
10:11:27 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:21.263 10:11:27 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
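
What follows is set_test_storage carving out scratch space for the xnvme tests: the requested 2214592512 bytes are the caller's 2147483648 (2 GiB) plus a 64 MiB margin (67108864), a fallback directory is reserved with mktemp -udt spdk.XXXXXX, and `df -T` output is walked into the mounts/fss/avails/sizes/uses associative arrays so a filesystem with enough free space can be picked. The parsing loop, with names taken from the traced variables (df reports 1K blocks, hence the *1024; the surrounding candidate-selection logic is assumed):

    while read -r source fs size use avail _ mount; do   # @373
        mounts["$mount"]=$source                         # @374
        fss["$mount"]=$fs
        avails["$mount"]=$((avail * 1024))               # @375
        sizes["$mount"]=$((size * 1024))
        uses["$mount"]=$((use * 1024))                   # @376
    done < <(df -T | grep -v Filesystem)                 # @340
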
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE=
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68807 ]]
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68807
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.aIVl6n
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.aIVl6n/tests/xnvme /tmp/spdk.aIVl6n
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976530944
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591224320
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304
00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:12:21.264
10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976530944 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591224320 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.264 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:21.265 10:11:27 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=95393210368 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4309569536 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:21.265 * Looking for test storage... 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976530944 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:21.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:21.265 10:11:27 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.265 10:11:27 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:21.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.265 --rc genhtml_branch_coverage=1 00:12:21.265 --rc genhtml_function_coverage=1 00:12:21.265 --rc genhtml_legend=1 00:12:21.265 --rc geninfo_all_blocks=1 00:12:21.265 --rc geninfo_unexecuted_blocks=1 00:12:21.265 00:12:21.265 ' 00:12:21.265 10:11:27 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:21.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.265 --rc genhtml_branch_coverage=1 00:12:21.265 --rc genhtml_function_coverage=1 00:12:21.266 --rc genhtml_legend=1 00:12:21.266 --rc geninfo_all_blocks=1 00:12:21.266 --rc geninfo_unexecuted_blocks=1 00:12:21.266 00:12:21.266 ' 00:12:21.266 10:11:27 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:21.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.266 --rc genhtml_branch_coverage=1 00:12:21.266 --rc genhtml_function_coverage=1 00:12:21.266 --rc genhtml_legend=1 00:12:21.266 --rc geninfo_all_blocks=1 00:12:21.266 --rc geninfo_unexecuted_blocks=1 00:12:21.266 00:12:21.266 ' 00:12:21.266 10:11:27 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:21.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.266 --rc genhtml_branch_coverage=1 00:12:21.266 --rc genhtml_function_coverage=1 00:12:21.266 --rc genhtml_legend=1 00:12:21.266 --rc geninfo_all_blocks=1 00:12:21.266 --rc geninfo_unexecuted_blocks=1 00:12:21.266 00:12:21.266 ' 00:12:21.266 10:11:27 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.266 10:11:27 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:21.266 10:11:27 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.266 10:11:27 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.266 10:11:27 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.266 10:11:27 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.266 10:11:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.266 10:11:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.266 10:11:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:21.266 10:11:27 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:21.266 
10:11:27 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:21.266 10:11:27 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:21.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:21.790 Waiting for block devices as requested 00:12:21.790 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:21.790 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.052 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:22.052 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:27.420 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:27.420 10:11:33 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:27.420 10:11:33 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:27.420 10:11:33 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:27.680 10:11:33 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:27.680 10:11:33 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:27.680 No valid GPT data, bailing 00:12:27.680 10:11:33 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:27.680 10:11:33 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:12:27.680 10:11:33 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:27.680 10:11:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:27.680 10:11:33 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:27.680 10:11:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.680 10:11:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.680 ************************************ 00:12:27.680 START TEST xnvme_rpc 00:12:27.680 ************************************ 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69200 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69200 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69200 ']' 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.680 10:11:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:27.680 [2024-12-06 10:11:33.798712] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
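The xnvme_rpc test beginning here (the target's startup banner continues just below) is a round trip over SPDK's JSON-RPC socket: launch spdk_tgt, create an xnvme bdev, read its parameters back, delete it, and kill the target. A sketch of that sequence using the harness's rpc_cmd wrapper (these calls appear verbatim in the trace; the polling detail of waitforlisten is an assumption):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt=$!
  waitforlisten "$spdk_tgt"            # blocks until /var/tmp/spdk.sock answers RPCs
  # positional args: filename, bdev name, io_mechanism, conserve_cpu flag ('' or -c)
  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''
  rpc_cmd framework_get_config bdev    # parameters are verified field by field with jq
  rpc_cmd bdev_xnvme_delete xnvme_bdev
  killprocess "$spdk_tgt"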
00:12:27.680 [2024-12-06 10:11:33.798834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69200 ] 00:12:27.941 [2024-12-06 10:11:33.958939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.941 [2024-12-06 10:11:34.055844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.514 xnvme_bdev 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:28.514 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69200 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69200 ']' 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69200 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69200 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.838 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.838 killing process with pid 69200 00:12:28.839 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69200' 00:12:28.839 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69200 00:12:28.839 10:11:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69200 00:12:30.226 00:12:30.226 real 0m2.621s 00:12:30.226 user 0m2.725s 00:12:30.226 sys 0m0.335s 00:12:30.226 10:11:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.226 10:11:36 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:30.226 ************************************ 00:12:30.226 END TEST xnvme_rpc 00:12:30.226 ************************************ 00:12:30.226 10:11:36 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:30.226 10:11:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:30.226 10:11:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.226 10:11:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:30.226 ************************************ 00:12:30.226 START TEST xnvme_bdevperf 00:12:30.226 ************************************ 00:12:30.226 10:11:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:30.226 10:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:30.226 10:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:30.226 10:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:30.227 10:11:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:30.227 10:11:36 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:30.227 10:11:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:30.227 10:11:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:30.487 { 00:12:30.487 "subsystems": [ 00:12:30.487 { 00:12:30.487 "subsystem": "bdev", 00:12:30.487 "config": [ 00:12:30.487 { 00:12:30.487 "params": { 00:12:30.487 "io_mechanism": "libaio", 00:12:30.487 "conserve_cpu": false, 00:12:30.487 "filename": "/dev/nvme0n1", 00:12:30.487 "name": "xnvme_bdev" 00:12:30.487 }, 00:12:30.487 "method": "bdev_xnvme_create" 00:12:30.487 }, 00:12:30.487 { 00:12:30.487 "method": "bdev_wait_for_examine" 00:12:30.487 } 00:12:30.487 ] 00:12:30.487 } 00:12:30.487 ] 00:12:30.487 } 00:12:30.487 [2024-12-06 10:11:36.440775] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:12:30.488 [2024-12-06 10:11:36.440886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69274 ] 00:12:30.488 [2024-12-06 10:11:36.601797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.748 [2024-12-06 10:11:36.697981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.009 Running I/O for 5 seconds... 00:12:32.891 35441.00 IOPS, 138.44 MiB/s [2024-12-06T10:11:40.003Z] 35860.50 IOPS, 140.08 MiB/s [2024-12-06T10:11:41.392Z] 35205.33 IOPS, 137.52 MiB/s [2024-12-06T10:11:41.966Z] 34279.50 IOPS, 133.90 MiB/s 00:12:35.799 Latency(us) 00:12:35.799 [2024-12-06T10:11:41.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.799 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:35.799 xnvme_bdev : 5.00 34109.51 133.24 0.00 0.00 1871.79 374.94 60898.07 00:12:35.799 [2024-12-06T10:11:41.966Z] =================================================================================================================== 00:12:35.799 [2024-12-06T10:11:41.966Z] Total : 34109.51 133.24 0.00 0.00 1871.79 374.94 60898.07 00:12:36.744 10:11:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:36.744 10:11:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:36.744 10:11:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:36.744 10:11:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:36.744 10:11:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:36.744 { 00:12:36.744 "subsystems": [ 00:12:36.744 { 00:12:36.744 "subsystem": "bdev", 00:12:36.744 "config": [ 00:12:36.744 { 00:12:36.744 "params": { 00:12:36.744 "io_mechanism": "libaio", 00:12:36.744 "conserve_cpu": false, 00:12:36.744 "filename": "/dev/nvme0n1", 00:12:36.744 "name": "xnvme_bdev" 00:12:36.744 }, 00:12:36.744 "method": "bdev_xnvme_create" 00:12:36.744 }, 00:12:36.744 { 00:12:36.744 "method": "bdev_wait_for_examine" 00:12:36.744 } 00:12:36.744 ] 00:12:36.744 } 00:12:36.744 ] 00:12:36.744 } 00:12:36.744 [2024-12-06 10:11:42.795423] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
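Both bdevperf passes here, the randread run above and the randwrite run whose banner starts below, receive their bdev table as JSON on an inherited file descriptor rather than a config file; the { "subsystems": ... } block printed in the trace is that document, emitted by gen_conf onto fd 62. Reconstructed by hand, the first invocation amounts to roughly this (flags and JSON are copied from the log; the process-substitution plumbing shown here is an assumption about how the harness wires up /dev/fd/62):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
      62< <(echo '{"subsystems":[{"subsystem":"bdev","config":[
            {"params":{"io_mechanism":"libaio","conserve_cpu":false,
                       "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
             "method":"bdev_xnvme_create"},
            {"method":"bdev_wait_for_examine"}]}]}')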
00:12:36.744 [2024-12-06 10:11:42.795572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69344 ] 00:12:37.005 [2024-12-06 10:11:42.956821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.005 [2024-12-06 10:11:43.084162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.267 Running I/O for 5 seconds... 00:12:39.597 33541.00 IOPS, 131.02 MiB/s [2024-12-06T10:11:46.798Z] 35512.00 IOPS, 138.72 MiB/s [2024-12-06T10:11:47.741Z] 36197.67 IOPS, 141.40 MiB/s [2024-12-06T10:11:48.682Z] 36324.75 IOPS, 141.89 MiB/s 00:12:42.515 Latency(us) 00:12:42.515 [2024-12-06T10:11:48.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.516 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:42.516 xnvme_bdev : 5.00 36195.74 141.39 0.00 0.00 1763.47 294.60 11746.07 00:12:42.516 [2024-12-06T10:11:48.683Z] =================================================================================================================== 00:12:42.516 [2024-12-06T10:11:48.683Z] Total : 36195.74 141.39 0.00 0.00 1763.47 294.60 11746.07 00:12:43.085 00:12:43.085 real 0m12.832s 00:12:43.085 user 0m4.759s 00:12:43.085 sys 0m5.950s 00:12:43.085 10:11:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:43.085 ************************************ 00:12:43.085 END TEST xnvme_bdevperf 00:12:43.085 ************************************ 00:12:43.085 10:11:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 10:11:49 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:43.345 10:11:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:43.345 10:11:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:43.345 10:11:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 ************************************ 00:12:43.345 START TEST xnvme_fio_plugin 00:12:43.345 ************************************ 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:43.345 10:11:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:43.345 { 00:12:43.345 "subsystems": [ 00:12:43.345 { 00:12:43.345 "subsystem": "bdev", 00:12:43.345 "config": [ 00:12:43.345 { 00:12:43.345 "params": { 00:12:43.345 "io_mechanism": "libaio", 00:12:43.345 "conserve_cpu": false, 00:12:43.345 "filename": "/dev/nvme0n1", 00:12:43.345 "name": "xnvme_bdev" 00:12:43.345 }, 00:12:43.345 "method": "bdev_xnvme_create" 00:12:43.345 }, 00:12:43.345 { 00:12:43.345 "method": "bdev_wait_for_examine" 00:12:43.345 } 00:12:43.345 ] 00:12:43.345 } 00:12:43.345 ] 00:12:43.345 } 00:12:43.346 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:43.346 fio-3.35 00:12:43.346 Starting 1 thread 00:12:49.927 00:12:49.927 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69463: Fri Dec 6 10:11:55 2024 00:12:49.927 read: IOPS=33.3k, BW=130MiB/s (136MB/s)(651MiB/5001msec) 00:12:49.927 slat (usec): min=4, max=2024, avg=19.15, stdev=94.33 00:12:49.927 clat (usec): min=106, max=4704, avg=1394.70, stdev=496.26 00:12:49.927 lat (usec): min=210, max=4712, avg=1413.85, stdev=486.18 00:12:49.927 clat percentiles (usec): 00:12:49.927 | 1.00th=[ 310], 5.00th=[ 603], 10.00th=[ 783], 20.00th=[ 988], 00:12:49.927 | 30.00th=[ 1139], 40.00th=[ 1270], 50.00th=[ 1385], 60.00th=[ 1516], 00:12:49.927 | 70.00th=[ 1631], 80.00th=[ 1778], 90.00th=[ 1975], 95.00th=[ 2180], 00:12:49.927 | 99.00th=[ 2802], 99.50th=[ 3097], 99.90th=[ 3720], 99.95th=[ 3982], 00:12:49.927 | 99.99th=[ 4178] 00:12:49.927 bw ( KiB/s): min=122264, max=140960, per=99.61%, avg=132747.56, 
stdev=6071.74, samples=9 00:12:49.927 iops : min=30566, max=35240, avg=33186.89, stdev=1517.94, samples=9 00:12:49.927 lat (usec) : 250=0.48%, 500=2.66%, 750=5.71%, 1000=11.90% 00:12:49.927 lat (msec) : 2=69.95%, 4=9.25%, 10=0.04% 00:12:49.927 cpu : usr=49.60%, sys=42.36%, ctx=16, majf=0, minf=764 00:12:49.927 IO depths : 1=0.6%, 2=1.5%, 4=3.4%, 8=8.5%, 16=22.6%, 32=61.3%, >=64=2.1% 00:12:49.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.927 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:49.927 issued rwts: total=166622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.927 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:49.927 00:12:49.927 Run status group 0 (all jobs): 00:12:49.927 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=651MiB (682MB), run=5001-5001msec 00:12:50.189 ----------------------------------------------------- 00:12:50.189 Suppressions used: 00:12:50.189 count bytes template 00:12:50.189 1 11 /usr/src/fio/parse.c 00:12:50.189 1 8 libtcmalloc_minimal.so 00:12:50.189 1 904 libcrypto.so 00:12:50.189 ----------------------------------------------------- 00:12:50.189 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:50.189 10:11:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:50.189 { 00:12:50.189 "subsystems": [ 00:12:50.189 { 00:12:50.189 "subsystem": "bdev", 00:12:50.189 "config": [ 00:12:50.189 { 00:12:50.189 "params": { 00:12:50.189 "io_mechanism": "libaio", 00:12:50.189 "conserve_cpu": false, 00:12:50.189 "filename": "/dev/nvme0n1", 00:12:50.189 "name": "xnvme_bdev" 00:12:50.189 }, 00:12:50.189 "method": "bdev_xnvme_create" 00:12:50.189 }, 00:12:50.189 { 00:12:50.189 "method": "bdev_wait_for_examine" 00:12:50.189 } 00:12:50.189 ] 00:12:50.189 } 00:12:50.189 ] 00:12:50.189 } 00:12:50.450 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:50.450 fio-3.35 00:12:50.450 Starting 1 thread 00:12:57.039 00:12:57.039 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69559: Fri Dec 6 10:12:02 2024 00:12:57.039 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(748MiB/5001msec); 0 zone resets 00:12:57.039 slat (usec): min=4, max=1697, avg=18.35, stdev=70.50 00:12:57.039 clat (usec): min=105, max=7259, avg=1173.77, stdev=522.20 00:12:57.039 lat (usec): min=181, max=7264, avg=1192.11, stdev=517.96 00:12:57.039 clat percentiles (usec): 00:12:57.039 | 1.00th=[ 277], 5.00th=[ 453], 10.00th=[ 586], 20.00th=[ 766], 00:12:57.039 | 30.00th=[ 889], 40.00th=[ 1004], 50.00th=[ 1106], 60.00th=[ 1221], 00:12:57.039 | 70.00th=[ 1352], 80.00th=[ 1532], 90.00th=[ 1795], 95.00th=[ 2073], 00:12:57.039 | 99.00th=[ 2933], 99.50th=[ 3326], 99.90th=[ 4228], 99.95th=[ 4490], 00:12:57.039 | 99.99th=[ 5211] 00:12:57.039 bw ( KiB/s): min=139408, max=165408, per=100.00%, avg=153200.89, stdev=8776.95, samples=9 00:12:57.039 iops : min=34852, max=41352, avg=38300.22, stdev=2194.24, samples=9 00:12:57.039 lat (usec) : 250=0.69%, 500=5.88%, 750=12.45%, 1000=20.99% 00:12:57.039 lat (msec) : 2=54.06%, 4=5.80%, 10=0.13% 00:12:57.039 cpu : usr=42.78%, sys=44.50%, ctx=34, majf=0, minf=765 00:12:57.039 IO depths : 1=0.4%, 2=1.0%, 4=2.8%, 8=8.0%, 16=22.9%, 32=62.8%, >=64=2.2% 00:12:57.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.039 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:57.039 issued rwts: total=0,191495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.039 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.039 00:12:57.039 Run status group 0 (all jobs): 00:12:57.039 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=748MiB (784MB), run=5001-5001msec 00:12:57.039 ----------------------------------------------------- 00:12:57.039 Suppressions used: 00:12:57.039 count bytes template 00:12:57.039 1 11 /usr/src/fio/parse.c 00:12:57.039 1 8 libtcmalloc_minimal.so 00:12:57.039 1 904 libcrypto.so 00:12:57.039 ----------------------------------------------------- 00:12:57.039 00:12:57.039 00:12:57.039 real 0m13.879s 00:12:57.039 user 0m7.506s 00:12:57.039 sys 0m4.941s 00:12:57.039 10:12:03 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.039 ************************************ 00:12:57.039 END TEST xnvme_fio_plugin 00:12:57.039 10:12:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:57.039 ************************************ 00:12:57.301 10:12:03 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:57.301 10:12:03 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:57.301 10:12:03 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:57.301 10:12:03 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:57.301 10:12:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.301 10:12:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.301 10:12:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.301 ************************************ 00:12:57.301 START TEST xnvme_rpc 00:12:57.301 ************************************ 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69641 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69641 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69641 ']' 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.301 10:12:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.301 [2024-12-06 10:12:03.316501] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
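From this point the whole xnvme_rpc / xnvme_bdevperf / xnvme_fio_plugin cycle repeats with conserve_cpu flipped to true, which is why the same START/END banners recur below (the second xnvme_rpc startup banner continues right after this note). The driving pattern, condensed from the cc and xnvme_conserve_cpu arrays visible in the trace (a sketch of the loop in xnvme.sh, not its verbatim source):

  declare -A cc=( ["false"]="" ["true"]="-c" )   # maps the mode to the extra RPC flag
  for conserve_cpu in false true; do             # mirrors xnvme_conserve_cpu=('false' 'true')
      method_bdev_xnvme_create_0["conserve_cpu"]=$conserve_cpu
      run_test xnvme_rpc xnvme_rpc               # creates the bdev with ${cc[$conserve_cpu]}
      run_test xnvme_bdevperf xnvme_bdevperf
      run_test xnvme_fio_plugin xnvme_fio_plugin
  done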
00:12:57.301 [2024-12-06 10:12:03.316668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69641 ] 00:12:57.563 [2024-12-06 10:12:03.486846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.563 [2024-12-06 10:12:03.620639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.508 xnvme_bdev 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69641 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69641 ']' 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69641 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69641 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.508 killing process with pid 69641 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69641' 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69641 00:12:58.508 10:12:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69641 00:13:00.503 00:13:00.503 real 0m2.969s 00:13:00.503 user 0m2.960s 00:13:00.503 sys 0m0.475s 00:13:00.503 10:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.503 ************************************ 00:13:00.503 END TEST xnvme_rpc 00:13:00.503 ************************************ 00:13:00.503 10:12:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.503 10:12:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:00.503 10:12:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:00.503 10:12:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.503 10:12:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.503 ************************************ 00:13:00.503 START TEST xnvme_bdevperf 00:13:00.503 ************************************ 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:00.503 10:12:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:00.503 { 00:13:00.503 "subsystems": [ 00:13:00.503 { 00:13:00.503 "subsystem": "bdev", 00:13:00.503 "config": [ 00:13:00.503 { 00:13:00.503 "params": { 00:13:00.503 "io_mechanism": "libaio", 00:13:00.503 "conserve_cpu": true, 00:13:00.503 "filename": "/dev/nvme0n1", 00:13:00.503 "name": "xnvme_bdev" 00:13:00.503 }, 00:13:00.503 "method": "bdev_xnvme_create" 00:13:00.503 }, 00:13:00.503 { 00:13:00.503 "method": "bdev_wait_for_examine" 00:13:00.503 } 00:13:00.503 ] 00:13:00.503 } 00:13:00.503 ] 00:13:00.503 } 00:13:00.503 [2024-12-06 10:12:06.340562] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:13:00.503 [2024-12-06 10:12:06.340713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69715 ] 00:13:00.503 [2024-12-06 10:12:06.505860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.503 [2024-12-06 10:12:06.634763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.072 Running I/O for 5 seconds... 00:13:02.956 32619.00 IOPS, 127.42 MiB/s [2024-12-06T10:12:10.068Z] 32152.00 IOPS, 125.59 MiB/s [2024-12-06T10:12:11.004Z] 31891.67 IOPS, 124.58 MiB/s [2024-12-06T10:12:12.376Z] 33115.75 IOPS, 129.36 MiB/s 00:13:06.209 Latency(us) 00:13:06.209 [2024-12-06T10:12:12.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.209 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:06.209 xnvme_bdev : 5.00 33806.82 132.06 0.00 0.00 1888.52 368.64 8822.15 00:13:06.209 [2024-12-06T10:12:12.376Z] =================================================================================================================== 00:13:06.209 [2024-12-06T10:12:12.376Z] Total : 33806.82 132.06 0.00 0.00 1888.52 368.64 8822.15 00:13:06.775 10:12:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:06.775 10:12:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:06.775 10:12:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:06.775 10:12:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:06.775 10:12:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:06.775 { 00:13:06.775 "subsystems": [ 00:13:06.775 { 00:13:06.775 "subsystem": "bdev", 00:13:06.775 "config": [ 00:13:06.775 { 00:13:06.775 "params": { 00:13:06.775 "io_mechanism": "libaio", 00:13:06.775 "conserve_cpu": true, 00:13:06.775 "filename": "/dev/nvme0n1", 00:13:06.775 "name": "xnvme_bdev" 00:13:06.775 }, 00:13:06.775 "method": "bdev_xnvme_create" 00:13:06.775 }, 00:13:06.775 { 00:13:06.775 "method": "bdev_wait_for_examine" 00:13:06.775 } 00:13:06.775 ] 00:13:06.775 } 00:13:06.775 ] 00:13:06.775 } 00:13:06.775 [2024-12-06 10:12:12.760750] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
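The bdevperf passes in this test share a single command line, visible in the xtrace above; only the -w workload flips between randread and randwrite, and the bdev configuration is streamed in over /dev/fd/62. A minimal hand-run sketch of one pass, assuming an SPDK build tree at the path printed in the log, with an illustrative conf.json standing in for the /dev/fd/62 stream (it would hold the "subsystems" JSON shown above):

    # Sketch only: one bdevperf pass, config read from a file instead of /dev/fd/62.
    # -q 64: queue depth, -w: workload, -t 5: seconds, -T: bdev under test, -o 4096: I/O size.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/examples/bdevperf --json conf.json \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096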
00:13:06.775 [2024-12-06 10:12:12.760862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69790 ] 00:13:06.775 [2024-12-06 10:12:12.922318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.032 [2024-12-06 10:12:13.023633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.290 Running I/O for 5 seconds... 00:13:09.164 12741.00 IOPS, 49.77 MiB/s [2024-12-06T10:12:16.709Z] 7819.00 IOPS, 30.54 MiB/s [2024-12-06T10:12:17.334Z] 6243.00 IOPS, 24.39 MiB/s [2024-12-06T10:12:18.730Z] 6217.00 IOPS, 24.29 MiB/s [2024-12-06T10:12:18.731Z] 7614.60 IOPS, 29.74 MiB/s 00:13:12.564 Latency(us) 00:13:12.564 [2024-12-06T10:12:18.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.564 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:12.564 xnvme_bdev : 5.01 7626.66 29.79 0.00 0.00 8386.90 60.65 438788.73 00:13:12.564 [2024-12-06T10:12:18.731Z] =================================================================================================================== 00:13:12.564 [2024-12-06T10:12:18.731Z] Total : 7626.66 29.79 0.00 0.00 8386.90 60.65 438788.73 00:13:13.131 00:13:13.131 real 0m12.792s 00:13:13.131 user 0m7.920s 00:13:13.131 sys 0m3.765s 00:13:13.131 10:12:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.131 ************************************ 00:13:13.131 END TEST xnvme_bdevperf 00:13:13.131 10:12:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.131 ************************************ 00:13:13.131 10:12:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:13.131 10:12:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:13.131 10:12:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.131 10:12:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.131 ************************************ 00:13:13.131 START TEST xnvme_fio_plugin 00:13:13.131 ************************************ 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:13.131 10:12:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.131 { 00:13:13.131 "subsystems": [ 00:13:13.131 { 00:13:13.131 "subsystem": "bdev", 00:13:13.131 "config": [ 00:13:13.131 { 00:13:13.131 "params": { 00:13:13.131 "io_mechanism": "libaio", 00:13:13.131 "conserve_cpu": true, 00:13:13.131 "filename": "/dev/nvme0n1", 00:13:13.131 "name": "xnvme_bdev" 00:13:13.131 }, 00:13:13.131 "method": "bdev_xnvme_create" 00:13:13.131 }, 00:13:13.131 { 00:13:13.131 "method": "bdev_wait_for_examine" 00:13:13.131 } 00:13:13.131 ] 00:13:13.131 } 00:13:13.131 ] 00:13:13.131 } 00:13:13.393 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:13.393 fio-3.35 00:13:13.393 Starting 1 thread 00:13:19.957 00:13:19.957 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69904: Fri Dec 6 10:12:25 2024 00:13:19.957 read: IOPS=34.4k, BW=134MiB/s (141MB/s)(695MiB/5174msec) 00:13:19.957 slat (usec): min=4, max=2385, avg=21.03, stdev=83.99 00:13:19.957 clat (usec): min=48, max=343225, avg=1280.91, stdev=3861.83 00:13:19.957 lat (usec): min=162, max=343230, avg=1301.94, stdev=3861.06 00:13:19.957 clat percentiles (usec): 00:13:19.957 | 1.00th=[ 235], 5.00th=[ 420], 10.00th=[ 570], 20.00th=[ 734], 00:13:19.957 | 30.00th=[ 889], 40.00th=[ 1029], 50.00th=[ 1156], 60.00th=[ 1287], 00:13:19.957 | 70.00th=[ 1434], 80.00th=[ 1614], 90.00th=[ 1893], 95.00th=[ 2180], 00:13:19.957 | 99.00th=[ 2900], 99.50th=[ 3163], 99.90th=[ 3851], 99.95th=[ 5669], 00:13:19.957 | 99.99th=[191890] 00:13:19.957 bw ( KiB/s): min=125560, max=170448, 
per=100.00%, avg=142282.40, stdev=12114.96, samples=10 00:13:19.957 iops : min=31390, max=42612, avg=35570.60, stdev=3028.74, samples=10 00:13:19.957 lat (usec) : 50=0.01%, 100=0.01%, 250=1.26%, 500=6.05%, 750=13.72% 00:13:19.957 lat (usec) : 1000=16.94% 00:13:19.957 lat (msec) : 2=54.19%, 4=7.75%, 10=0.03%, 20=0.01%, 50=0.01% 00:13:19.957 lat (msec) : 100=0.01%, 250=0.03%, 500=0.01% 00:13:19.957 cpu : usr=41.64%, sys=50.14%, ctx=8, majf=0, minf=764 00:13:19.957 IO depths : 1=0.4%, 2=1.0%, 4=3.0%, 8=8.6%, 16=23.5%, 32=61.3%, >=64=2.1% 00:13:19.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.957 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:19.957 issued rwts: total=177903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.957 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:19.957 00:13:19.957 Run status group 0 (all jobs): 00:13:19.957 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=695MiB (729MB), run=5174-5174msec 00:13:20.216 ----------------------------------------------------- 00:13:20.216 Suppressions used: 00:13:20.216 count bytes template 00:13:20.216 1 11 /usr/src/fio/parse.c 00:13:20.216 1 8 libtcmalloc_minimal.so 00:13:20.216 1 904 libcrypto.so 00:13:20.216 ----------------------------------------------------- 00:13:20.216 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:20.217 10:12:26 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:20.217 10:12:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.217 { 00:13:20.217 "subsystems": [ 00:13:20.217 { 00:13:20.217 "subsystem": "bdev", 00:13:20.217 "config": [ 00:13:20.217 { 00:13:20.217 "params": { 00:13:20.217 "io_mechanism": "libaio", 00:13:20.217 "conserve_cpu": true, 00:13:20.217 "filename": "/dev/nvme0n1", 00:13:20.217 "name": "xnvme_bdev" 00:13:20.217 }, 00:13:20.217 "method": "bdev_xnvme_create" 00:13:20.217 }, 00:13:20.217 { 00:13:20.217 "method": "bdev_wait_for_examine" 00:13:20.217 } 00:13:20.217 ] 00:13:20.217 } 00:13:20.217 ] 00:13:20.217 } 00:13:20.217 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:20.217 fio-3.35 00:13:20.217 Starting 1 thread 00:13:26.771 00:13:26.771 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70001: Fri Dec 6 10:12:31 2024 00:13:26.771 write: IOPS=36.1k, BW=141MiB/s (148MB/s)(706MiB/5001msec); 0 zone resets 00:13:26.771 slat (usec): min=4, max=1959, avg=21.40, stdev=80.92 00:13:26.771 clat (usec): min=105, max=7513, avg=1185.17, stdev=552.85 00:13:26.771 lat (usec): min=159, max=7518, avg=1206.56, stdev=548.91 00:13:26.771 clat percentiles (usec): 00:13:26.771 | 1.00th=[ 239], 5.00th=[ 404], 10.00th=[ 553], 20.00th=[ 725], 00:13:26.771 | 30.00th=[ 873], 40.00th=[ 1004], 50.00th=[ 1123], 60.00th=[ 1254], 00:13:26.771 | 70.00th=[ 1401], 80.00th=[ 1582], 90.00th=[ 1876], 95.00th=[ 2180], 00:13:26.771 | 99.00th=[ 2900], 99.50th=[ 3163], 99.90th=[ 3752], 99.95th=[ 4113], 00:13:26.771 | 99.99th=[ 6587] 00:13:26.771 bw ( KiB/s): min=140472, max=146576, per=99.49%, avg=143851.56, stdev=1844.45, samples=9 00:13:26.771 iops : min=35118, max=36644, avg=35962.89, stdev=461.11, samples=9 00:13:26.771 lat (usec) : 250=1.23%, 500=6.64%, 750=13.82%, 1000=18.31% 00:13:26.771 lat (msec) : 2=52.57%, 4=7.38%, 10=0.06% 00:13:26.771 cpu : usr=37.34%, sys=53.64%, ctx=37, majf=0, minf=765 00:13:26.771 IO depths : 1=0.3%, 2=1.0%, 4=3.0%, 8=8.8%, 16=23.9%, 32=60.9%, >=64=2.0% 00:13:26.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.771 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:26.771 issued rwts: total=0,180778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.771 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:26.771 00:13:26.771 Run status group 0 (all jobs): 00:13:26.771 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=706MiB (740MB), run=5001-5001msec 00:13:26.771 ----------------------------------------------------- 00:13:26.771 Suppressions used: 00:13:26.771 count bytes template 00:13:26.771 1 11 /usr/src/fio/parse.c 00:13:26.771 1 8 libtcmalloc_minimal.so 00:13:26.771 1 904 libcrypto.so 00:13:26.771 ----------------------------------------------------- 00:13:26.771 
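Both fio passes in this test assemble the same command, and the xtrace above shows how: the script locates the libasan that the spdk_bdev plugin links against, preloads it together with the plugin, and hands fio the bdev JSON over /dev/fd/62. A hand-run sketch under the same assumptions, with an illustrative conf.json in place of the file-descriptor trick (paths copied from the log):

    # Sketch of the fio plugin invocation the harness builds above.
    export LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=conf.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev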
00:13:27.030 00:13:27.030 real 0m13.822s 00:13:27.030 user 0m6.709s 00:13:27.030 sys 0m5.840s 00:13:27.030 10:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.030 ************************************ 00:13:27.030 END TEST xnvme_fio_plugin 00:13:27.030 ************************************ 00:13:27.030 10:12:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:27.030 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:27.030 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:27.030 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:27.030 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:27.030 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:27.030 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:27.031 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:27.031 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:27.031 10:12:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:27.031 10:12:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.031 10:12:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.031 10:12:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.031 ************************************ 00:13:27.031 START TEST xnvme_rpc 00:13:27.031 ************************************ 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70082 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70082 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70082 ']' 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.031 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.031 [2024-12-06 10:12:33.084964] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
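The xnvme_rpc test starting here repeats the earlier libaio cycle with io_uring: create the bdev over RPC, read the saved config back with framework_get_config, pick single fields out with jq, then delete the bdev and kill the target. A minimal standalone equivalent against a running spdk_tgt, assuming scripts/rpc.py from the same tree (the suite's rpc_cmd is effectively a wrapper around it):

    # Sketch of the create/inspect/delete cycle exercised by xnvme_rpc.
    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    # prints: io_uring
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev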
00:13:27.031 [2024-12-06 10:12:33.085078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70082 ] 00:13:27.290 [2024-12-06 10:12:33.243142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.290 [2024-12-06 10:12:33.342466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.857 xnvme_bdev 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:27.857 10:12:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.857 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.857 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.857 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:27.858 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70082 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70082 ']' 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70082 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70082 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.116 killing process with pid 70082 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70082' 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70082 00:13:28.116 10:12:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70082 00:13:29.490 00:13:29.490 real 0m2.600s 00:13:29.490 user 0m2.687s 00:13:29.490 sys 0m0.356s 00:13:29.490 10:12:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.490 ************************************ 00:13:29.490 END TEST xnvme_rpc 00:13:29.490 ************************************ 00:13:29.490 10:12:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.747 10:12:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:29.747 10:12:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:29.747 10:12:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.747 10:12:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.747 ************************************ 00:13:29.747 START TEST xnvme_bdevperf 00:13:29.747 ************************************ 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:29.747 10:12:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:29.747 { 00:13:29.747 "subsystems": [ 00:13:29.747 { 00:13:29.747 "subsystem": "bdev", 00:13:29.747 "config": [ 00:13:29.747 { 00:13:29.747 "params": { 00:13:29.747 "io_mechanism": "io_uring", 00:13:29.747 "conserve_cpu": false, 00:13:29.747 "filename": "/dev/nvme0n1", 00:13:29.747 "name": "xnvme_bdev" 00:13:29.747 }, 00:13:29.747 "method": "bdev_xnvme_create" 00:13:29.747 }, 00:13:29.747 { 00:13:29.747 "method": "bdev_wait_for_examine" 00:13:29.747 } 00:13:29.747 ] 00:13:29.747 } 00:13:29.747 ] 00:13:29.747 } 00:13:29.747 [2024-12-06 10:12:35.731942] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:13:29.747 [2024-12-06 10:12:35.732058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70150 ] 00:13:29.747 [2024-12-06 10:12:35.893686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.004 [2024-12-06 10:12:35.995128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.260 Running I/O for 5 seconds... 00:13:32.162 45935.00 IOPS, 179.43 MiB/s [2024-12-06T10:12:39.264Z] 46563.00 IOPS, 181.89 MiB/s [2024-12-06T10:12:40.638Z] 43871.67 IOPS, 171.37 MiB/s [2024-12-06T10:12:41.569Z] 42585.50 IOPS, 166.35 MiB/s [2024-12-06T10:12:41.569Z] 42313.40 IOPS, 165.29 MiB/s 00:13:35.402 Latency(us) 00:13:35.402 [2024-12-06T10:12:41.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.402 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:35.402 xnvme_bdev : 5.01 42259.11 165.07 0.00 0.00 1508.83 57.50 79853.10 00:13:35.402 [2024-12-06T10:12:41.569Z] =================================================================================================================== 00:13:35.402 [2024-12-06T10:12:41.569Z] Total : 42259.11 165.07 0.00 0.00 1508.83 57.50 79853.10 00:13:35.969 10:12:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.969 10:12:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:35.969 10:12:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:35.969 10:12:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:35.969 10:12:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:35.969 { 00:13:35.969 "subsystems": [ 00:13:35.969 { 00:13:35.969 "subsystem": "bdev", 00:13:35.969 "config": [ 00:13:35.969 { 00:13:35.969 "params": { 00:13:35.969 "io_mechanism": "io_uring", 00:13:35.969 "conserve_cpu": false, 00:13:35.969 "filename": "/dev/nvme0n1", 00:13:35.969 "name": "xnvme_bdev" 00:13:35.969 }, 00:13:35.969 "method": "bdev_xnvme_create" 00:13:35.969 }, 00:13:35.969 { 00:13:35.969 "method": "bdev_wait_for_examine" 00:13:35.969 } 00:13:35.969 ] 00:13:35.969 } 00:13:35.969 ] 00:13:35.969 } 00:13:35.969 [2024-12-06 10:12:42.017825] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
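A quick consistency check on the randread numbers above: with the queue depth pinned at 64, Little's law predicts IOPS of roughly depth / mean latency, and 64 / 1508.83 usec is ~42.4k, which lines up with the 42259.11 IOPS reported. The earlier libaio randread pass matches the same way: 64 / 1888.52 usec is ~33.9k against 33806.82.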
00:13:35.969 [2024-12-06 10:12:42.017913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70231 ] 00:13:36.227 [2024-12-06 10:12:42.174085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.227 [2024-12-06 10:12:42.270663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.485 Running I/O for 5 seconds... 00:13:38.790 5954.00 IOPS, 23.26 MiB/s [2024-12-06T10:12:45.527Z] 5954.50 IOPS, 23.26 MiB/s [2024-12-06T10:12:46.899Z] 6112.67 IOPS, 23.88 MiB/s [2024-12-06T10:12:47.832Z] 6325.25 IOPS, 24.71 MiB/s [2024-12-06T10:12:47.832Z] 6421.80 IOPS, 25.09 MiB/s 00:13:41.665 Latency(us) 00:13:41.665 [2024-12-06T10:12:47.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.665 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:41.665 xnvme_bdev : 5.01 6416.68 25.07 0.00 0.00 9956.56 43.91 42144.69 00:13:41.665 [2024-12-06T10:12:47.832Z] =================================================================================================================== 00:13:41.665 [2024-12-06T10:12:47.832Z] Total : 6416.68 25.07 0.00 0.00 9956.56 43.91 42144.69 00:13:42.229 ************************************ 00:13:42.229 END TEST xnvme_bdevperf 00:13:42.229 ************************************ 00:13:42.229 00:13:42.229 real 0m12.584s 00:13:42.229 user 0m5.702s 00:13:42.229 sys 0m6.635s 00:13:42.229 10:12:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.229 10:12:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:42.229 10:12:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:42.229 10:12:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:42.229 10:12:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.229 10:12:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.229 ************************************ 00:13:42.229 START TEST xnvme_fio_plugin 00:13:42.229 ************************************ 00:13:42.229 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:42.229 10:12:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:42.230 10:12:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.230 { 00:13:42.230 "subsystems": [ 00:13:42.230 { 00:13:42.230 "subsystem": "bdev", 00:13:42.230 "config": [ 00:13:42.230 { 00:13:42.230 "params": { 00:13:42.230 "io_mechanism": "io_uring", 00:13:42.230 "conserve_cpu": false, 00:13:42.230 "filename": "/dev/nvme0n1", 00:13:42.230 "name": "xnvme_bdev" 00:13:42.230 }, 00:13:42.230 "method": "bdev_xnvme_create" 00:13:42.230 }, 00:13:42.230 { 00:13:42.230 "method": "bdev_wait_for_examine" 00:13:42.230 } 00:13:42.230 ] 00:13:42.230 } 00:13:42.230 ] 00:13:42.230 } 00:13:42.486 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:42.486 fio-3.35 00:13:42.486 Starting 1 thread 00:13:49.039 00:13:49.039 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70339: Fri Dec 6 10:12:54 2024 00:13:49.039 read: IOPS=36.3k, BW=142MiB/s (149MB/s)(710MiB/5002msec) 00:13:49.039 slat (nsec): min=2849, max=70925, avg=3891.94, stdev=2337.98 00:13:49.039 clat (usec): min=785, max=7177, avg=1605.17, stdev=321.43 00:13:49.039 lat (usec): min=788, max=7180, avg=1609.06, stdev=321.86 00:13:49.039 clat percentiles (usec): 00:13:49.039 | 1.00th=[ 971], 5.00th=[ 1123], 10.00th=[ 1205], 20.00th=[ 1319], 00:13:49.039 | 30.00th=[ 1418], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1680], 00:13:49.039 | 70.00th=[ 1762], 80.00th=[ 1860], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:49.039 | 99.00th=[ 2474], 99.50th=[ 2671], 99.90th=[ 3097], 99.95th=[ 3195], 00:13:49.039 | 99.99th=[ 3490] 00:13:49.039 bw ( KiB/s): min=142336, 
max=150016, per=100.00%, avg=145749.33, stdev=3124.88, samples=9 00:13:49.039 iops : min=35584, max=37504, avg=36437.33, stdev=781.22, samples=9 00:13:49.039 lat (usec) : 1000=1.44% 00:13:49.039 lat (msec) : 2=88.32%, 4=10.24%, 10=0.01% 00:13:49.039 cpu : usr=31.55%, sys=67.27%, ctx=8, majf=0, minf=762 00:13:49.039 IO depths : 1=1.5%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:49.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.039 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:49.039 issued rwts: total=181727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.039 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:49.039 00:13:49.039 Run status group 0 (all jobs): 00:13:49.039 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=710MiB (744MB), run=5002-5002msec 00:13:49.039 ----------------------------------------------------- 00:13:49.039 Suppressions used: 00:13:49.039 count bytes template 00:13:49.039 1 11 /usr/src/fio/parse.c 00:13:49.039 1 8 libtcmalloc_minimal.so 00:13:49.039 1 904 libcrypto.so 00:13:49.039 ----------------------------------------------------- 00:13:49.039 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:49.039 10:12:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.039 { 00:13:49.039 "subsystems": [ 00:13:49.039 { 00:13:49.039 "subsystem": "bdev", 00:13:49.039 "config": [ 00:13:49.039 { 00:13:49.039 "params": { 00:13:49.039 "io_mechanism": "io_uring", 00:13:49.039 "conserve_cpu": false, 00:13:49.039 "filename": "/dev/nvme0n1", 00:13:49.039 "name": "xnvme_bdev" 00:13:49.039 }, 00:13:49.039 "method": "bdev_xnvme_create" 00:13:49.039 }, 00:13:49.039 { 00:13:49.039 "method": "bdev_wait_for_examine" 00:13:49.039 } 00:13:49.039 ] 00:13:49.039 } 00:13:49.039 ] 00:13:49.039 } 00:13:49.297 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:49.297 fio-3.35 00:13:49.297 Starting 1 thread 00:13:55.855 00:13:55.855 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70431: Fri Dec 6 10:13:00 2024 00:13:55.855 write: IOPS=33.7k, BW=132MiB/s (138MB/s)(659MiB/5001msec); 0 zone resets 00:13:55.855 slat (nsec): min=2890, max=61292, avg=4284.58, stdev=2513.00 00:13:55.855 clat (usec): min=96, max=392162, avg=1725.73, stdev=7533.21 00:13:55.855 lat (usec): min=103, max=392175, avg=1730.01, stdev=7533.23 00:13:55.855 clat percentiles (usec): 00:13:55.855 | 1.00th=[ 938], 5.00th=[ 1090], 10.00th=[ 1172], 20.00th=[ 1287], 00:13:55.855 | 30.00th=[ 1385], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1631], 00:13:55.855 | 70.00th=[ 1729], 80.00th=[ 1844], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:55.855 | 99.00th=[ 2573], 99.50th=[ 2737], 99.90th=[ 3326], 99.95th=[ 5669], 00:13:55.855 | 99.99th=[392168] 00:13:55.855 bw ( KiB/s): min=65128, max=146440, per=98.95%, avg=133556.44, stdev=26419.29, samples=9 00:13:55.855 iops : min=16282, max=36610, avg=33389.11, stdev=6604.82, samples=9 00:13:55.855 lat (usec) : 100=0.01%, 250=0.01%, 500=0.04%, 750=0.08%, 1000=2.07% 00:13:55.855 lat (msec) : 2=87.81%, 4=9.92%, 10=0.03%, 50=0.01%, 500=0.04% 00:13:55.855 cpu : usr=32.18%, sys=66.68%, ctx=12, majf=0, minf=763 00:13:55.855 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:55.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.855 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:55.855 issued rwts: total=0,168746,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.855 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:55.855 00:13:55.855 Run status group 0 (all jobs): 00:13:55.856 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=659MiB (691MB), run=5001-5001msec 00:13:55.856 ----------------------------------------------------- 00:13:55.856 Suppressions used: 00:13:55.856 count bytes template 00:13:55.856 1 11 /usr/src/fio/parse.c 00:13:55.856 1 8 libtcmalloc_minimal.so 00:13:55.856 1 904 libcrypto.so 00:13:55.856 ----------------------------------------------------- 00:13:55.856 00:13:55.856 00:13:55.856 real 0m13.482s 00:13:55.856 user 0m5.877s 00:13:55.856 sys 0m7.162s 
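fio reports the run above in both binary and decimal units, which is why 132MiB/s and 138MB/s describe the same rate: 33.7k IOPS x 4096 B is ~138.0e6 B/s, i.e. 138 MB/s, and dividing by 2^20 gives ~132 MiB/s. The large clat stdev (7533 usec against a 1725 usec mean) is explained by the tail visible in the millisecond buckets: 0.04% of writes landed in the 500 ms bucket, consistent with the 392162 usec maximum.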
00:13:55.856 10:13:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.856 ************************************ 00:13:55.856 END TEST xnvme_fio_plugin 00:13:55.856 ************************************ 00:13:55.856 10:13:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:55.856 10:13:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:55.856 10:13:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:55.856 10:13:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:55.856 10:13:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:55.856 10:13:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:55.856 10:13:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.856 10:13:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:55.856 ************************************ 00:13:55.856 START TEST xnvme_rpc 00:13:55.856 ************************************ 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:55.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70517 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70517 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70517 ']' 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.856 10:13:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.856 [2024-12-06 10:13:01.972013] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:13:55.856 [2024-12-06 10:13:01.972397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70517 ] 00:13:56.114 [2024-12-06 10:13:02.150552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.114 [2024-12-06 10:13:02.251538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.048 xnvme_bdev 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.048 10:13:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70517 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70517 ']' 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70517 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70517 00:13:57.048 killing process with pid 70517 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70517' 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70517 00:13:57.048 10:13:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70517 00:13:58.424 00:13:58.424 real 0m2.650s 00:13:58.424 user 0m2.732s 00:13:58.424 sys 0m0.372s 00:13:58.424 10:13:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.424 ************************************ 00:13:58.424 END TEST xnvme_rpc 00:13:58.424 ************************************ 00:13:58.424 10:13:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.424 10:13:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:58.424 10:13:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:58.424 10:13:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.424 10:13:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.424 ************************************ 00:13:58.424 START TEST xnvme_bdevperf 00:13:58.424 ************************************ 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:58.424 10:13:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:58.682 { 00:13:58.682 "subsystems": [ 00:13:58.682 { 00:13:58.682 "subsystem": "bdev", 00:13:58.682 "config": [ 00:13:58.682 { 00:13:58.682 "params": { 00:13:58.682 "io_mechanism": "io_uring", 00:13:58.682 "conserve_cpu": true, 00:13:58.682 "filename": "/dev/nvme0n1", 00:13:58.682 "name": "xnvme_bdev" 00:13:58.682 }, 00:13:58.682 "method": "bdev_xnvme_create" 00:13:58.682 }, 00:13:58.682 { 00:13:58.682 "method": "bdev_wait_for_examine" 00:13:58.682 } 00:13:58.682 ] 00:13:58.682 } 00:13:58.682 ] 00:13:58.682 } 00:13:58.682 [2024-12-06 10:13:04.645808] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:13:58.682 [2024-12-06 10:13:04.645929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70586 ] 00:13:58.682 [2024-12-06 10:13:04.804191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.940 [2024-12-06 10:13:04.903418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.196 Running I/O for 5 seconds... 00:14:01.106 35237.00 IOPS, 137.64 MiB/s [2024-12-06T10:13:08.203Z] 36533.00 IOPS, 142.71 MiB/s [2024-12-06T10:13:09.575Z] 37282.00 IOPS, 145.63 MiB/s [2024-12-06T10:13:10.507Z] 37593.75 IOPS, 146.85 MiB/s [2024-12-06T10:13:10.507Z] 37746.00 IOPS, 147.45 MiB/s 00:14:04.340 Latency(us) 00:14:04.340 [2024-12-06T10:13:10.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.340 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:04.340 xnvme_bdev : 5.01 37718.75 147.34 0.00 0.00 1692.36 171.72 22786.36 00:14:04.340 [2024-12-06T10:13:10.507Z] =================================================================================================================== 00:14:04.340 [2024-12-06T10:13:10.507Z] Total : 37718.75 147.34 0.00 0.00 1692.36 171.72 22786.36 00:14:04.906 10:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:04.906 10:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:04.906 10:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:04.906 10:13:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:04.906 10:13:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:04.906 { 00:14:04.906 "subsystems": [ 00:14:04.906 { 00:14:04.906 "subsystem": "bdev", 00:14:04.906 "config": [ 00:14:04.906 { 00:14:04.906 "params": { 00:14:04.906 "io_mechanism": "io_uring", 00:14:04.906 "conserve_cpu": true, 00:14:04.906 "filename": "/dev/nvme0n1", 00:14:04.906 "name": "xnvme_bdev" 00:14:04.906 }, 00:14:04.906 "method": "bdev_xnvme_create" 00:14:04.906 }, 00:14:04.906 { 00:14:04.906 "method": "bdev_wait_for_examine" 00:14:04.906 } 00:14:04.906 ] 00:14:04.906 } 00:14:04.906 ] 00:14:04.906 } 00:14:04.906 [2024-12-06 10:13:10.945545] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
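Only the bdev_xnvme_create parameters vary across the sweeps in this section; queue depth, block size, and run time are held constant. Read off the JSON configs printed so far, the matrix is libaio with conserve_cpu=true, io_uring with conserve_cpu=false, and now io_uring with conserve_cpu=true, where -c is how the suite spells conserve_cpu (per the cc mapping above). As create calls, one per run, never together:

    # The three configurations exercised in this section, as the suite's rpc calls.
    rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c      # conserve_cpu=true
    rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring       # conserve_cpu=false
    rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c    # conserve_cpu=true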
00:14:04.907 [2024-12-06 10:13:10.945664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70661 ] 00:14:05.165 [2024-12-06 10:13:11.106849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.165 [2024-12-06 10:13:11.207402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.423 Running I/O for 5 seconds... 00:14:07.730 8111.00 IOPS, 31.68 MiB/s [2024-12-06T10:13:14.463Z] 7814.50 IOPS, 30.53 MiB/s [2024-12-06T10:13:15.835Z] 7798.00 IOPS, 30.46 MiB/s [2024-12-06T10:13:16.768Z] 7854.25 IOPS, 30.68 MiB/s [2024-12-06T10:13:16.768Z] 7818.60 IOPS, 30.54 MiB/s 00:14:10.601 Latency(us) 00:14:10.601 [2024-12-06T10:13:16.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.601 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:10.601 xnvme_bdev : 5.01 7809.99 30.51 0.00 0.00 8179.92 54.74 107277.39 00:14:10.601 [2024-12-06T10:13:16.768Z] =================================================================================================================== 00:14:10.601 [2024-12-06T10:13:16.768Z] Total : 7809.99 30.51 0.00 0.00 8179.92 54.74 107277.39 00:14:11.230 00:14:11.230 real 0m12.613s 00:14:11.230 user 0m9.655s 00:14:11.230 sys 0m2.069s 00:14:11.230 10:13:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.230 ************************************ 00:14:11.230 END TEST xnvme_bdevperf 00:14:11.230 ************************************ 00:14:11.230 10:13:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.230 10:13:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:11.230 10:13:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:11.230 10:13:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.230 10:13:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.230 ************************************ 00:14:11.230 START TEST xnvme_fio_plugin 00:14:11.230 ************************************ 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:11.230 10:13:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:11.230 { 00:14:11.230 "subsystems": [ 00:14:11.230 { 00:14:11.230 "subsystem": "bdev", 00:14:11.230 "config": [ 00:14:11.230 { 00:14:11.230 "params": { 00:14:11.230 "io_mechanism": "io_uring", 00:14:11.230 "conserve_cpu": true, 00:14:11.230 "filename": "/dev/nvme0n1", 00:14:11.230 "name": "xnvme_bdev" 00:14:11.230 }, 00:14:11.230 "method": "bdev_xnvme_create" 00:14:11.230 }, 00:14:11.230 { 00:14:11.230 "method": "bdev_wait_for_examine" 00:14:11.230 } 00:14:11.230 ] 00:14:11.230 } 00:14:11.230 ] 00:14:11.230 } 00:14:11.488 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:11.488 fio-3.35 00:14:11.488 Starting 1 thread 00:14:18.061 00:14:18.061 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70780: Fri Dec 6 10:13:23 2024 00:14:18.061 read: IOPS=38.3k, BW=150MiB/s (157MB/s)(749MiB/5001msec) 00:14:18.061 slat (usec): min=2, max=238, avg= 3.78, stdev= 2.62 00:14:18.061 clat (usec): min=729, max=8949, avg=1520.03, stdev=317.69 00:14:18.061 lat (usec): min=774, max=8951, avg=1523.81, stdev=318.25 00:14:18.061 clat percentiles (usec): 00:14:18.061 | 1.00th=[ 971], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[ 1254], 00:14:18.062 | 30.00th=[ 1336], 40.00th=[ 1418], 50.00th=[ 1483], 60.00th=[ 1549], 00:14:18.062 | 70.00th=[ 1647], 80.00th=[ 1745], 90.00th=[ 1926], 95.00th=[ 2089], 00:14:18.062 | 99.00th=[ 2474], 99.50th=[ 2671], 99.90th=[ 3032], 99.95th=[ 3425], 00:14:18.062 | 99.99th=[ 3687] 00:14:18.062 bw ( KiB/s): min=145920, max=160256, 
per=99.87%, avg=153144.89, stdev=5037.38, samples=9 00:14:18.062 iops : min=36480, max=40064, avg=38286.22, stdev=1259.34, samples=9 00:14:18.062 lat (usec) : 750=0.01%, 1000=1.61% 00:14:18.062 lat (msec) : 2=90.82%, 4=7.57%, 10=0.01% 00:14:18.062 cpu : usr=58.78%, sys=37.36%, ctx=40, majf=0, minf=762 00:14:18.062 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:18.062 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:18.062 issued rwts: total=191725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:18.062 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:18.062 00:14:18.062 Run status group 0 (all jobs): 00:14:18.062 READ: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=749MiB (785MB), run=5001-5001msec 00:14:18.062 ----------------------------------------------------- 00:14:18.062 Suppressions used: 00:14:18.062 count bytes template 00:14:18.062 1 11 /usr/src/fio/parse.c 00:14:18.062 1 8 libtcmalloc_minimal.so 00:14:18.062 1 904 libcrypto.so 00:14:18.062 ----------------------------------------------------- 00:14:18.062 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.062 10:13:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.062 { 00:14:18.062 "subsystems": [ 00:14:18.062 { 00:14:18.062 "subsystem": "bdev", 00:14:18.062 "config": [ 00:14:18.062 { 00:14:18.062 "params": { 00:14:18.062 "io_mechanism": "io_uring", 00:14:18.062 "conserve_cpu": true, 00:14:18.062 "filename": "/dev/nvme0n1", 00:14:18.062 "name": "xnvme_bdev" 00:14:18.062 }, 00:14:18.062 "method": "bdev_xnvme_create" 00:14:18.062 }, 00:14:18.062 { 00:14:18.062 "method": "bdev_wait_for_examine" 00:14:18.062 } 00:14:18.062 ] 00:14:18.062 } 00:14:18.062 ] 00:14:18.062 } 00:14:18.062 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:18.062 fio-3.35 00:14:18.062 Starting 1 thread 00:14:24.618 00:14:24.618 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70866: Fri Dec 6 10:13:29 2024 00:14:24.618 write: IOPS=38.3k, BW=150MiB/s (157MB/s)(748MiB/5001msec); 0 zone resets 00:14:24.618 slat (nsec): min=2876, max=62798, avg=4085.94, stdev=2316.89 00:14:24.618 clat (usec): min=812, max=4189, avg=1507.17, stdev=304.24 00:14:24.618 lat (usec): min=816, max=4198, avg=1511.25, stdev=304.98 00:14:24.618 clat percentiles (usec): 00:14:24.618 | 1.00th=[ 979], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[ 1254], 00:14:24.618 | 30.00th=[ 1336], 40.00th=[ 1401], 50.00th=[ 1467], 60.00th=[ 1549], 00:14:24.618 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1909], 95.00th=[ 2089], 00:14:24.618 | 99.00th=[ 2409], 99.50th=[ 2540], 99.90th=[ 2933], 99.95th=[ 3064], 00:14:24.618 | 99.99th=[ 3589] 00:14:24.618 bw ( KiB/s): min=143848, max=158160, per=99.92%, avg=153098.67, stdev=4451.16, samples=9 00:14:24.618 iops : min=35962, max=39540, avg=38274.67, stdev=1112.79, samples=9 00:14:24.618 lat (usec) : 1000=1.51% 00:14:24.618 lat (msec) : 2=91.55%, 4=6.95%, 10=0.01% 00:14:24.618 cpu : usr=57.38%, sys=39.12%, ctx=16, majf=0, minf=763 00:14:24.618 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:24.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.618 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:24.618 issued rwts: total=0,191564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:24.618 00:14:24.618 Run status group 0 (all jobs): 00:14:24.618 WRITE: bw=150MiB/s (157MB/s), 150MiB/s-150MiB/s (157MB/s-157MB/s), io=748MiB (785MB), run=5001-5001msec 00:14:24.618 ----------------------------------------------------- 00:14:24.618 Suppressions used: 00:14:24.618 count bytes template 00:14:24.618 1 11 /usr/src/fio/parse.c 00:14:24.618 1 8 libtcmalloc_minimal.so 00:14:24.618 1 904 libcrypto.so 00:14:24.618 ----------------------------------------------------- 00:14:24.618 00:14:24.618 00:14:24.618 real 0m13.486s 00:14:24.618 user 0m8.446s 00:14:24.618 sys 0m4.344s 00:14:24.618 ************************************ 00:14:24.618 END TEST 
xnvme_fio_plugin 00:14:24.618 ************************************ 00:14:24.618 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.618 10:13:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:24.876 10:13:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:24.876 10:13:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:24.876 10:13:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.876 10:13:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:24.876 ************************************ 00:14:24.876 START TEST xnvme_rpc 00:14:24.876 ************************************ 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70947 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70947 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70947 ']' 00:14:24.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.876 10:13:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:24.876 [2024-12-06 10:13:30.889506] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
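The xtrace lines above (xnvme.sh@75 through @84) come from the suite's outer loops: every io_mechanism is crossed with every conserve_cpu setting, and the same three tests rerun for each combination. A condensed sketch reconstructed from those lines; the actual script may differ in detail:

for io in "${xnvme_io[@]}"; do                      # e.g. io_uring, io_uring_cmd
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  method_bdev_xnvme_create_0["filename"]=$filename  # /dev/nvme0n1 or /dev/ng0n1
  for cc in "${xnvme_conserve_cpu[@]}"; do          # false, then true
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    run_test xnvme_rpc xnvme_rpc                    # xnvme.sh@86
    run_test xnvme_bdevperf xnvme_bdevperf          # xnvme.sh@87
    run_test xnvme_fio_plugin xnvme_fio_plugin      # xnvme.sh@88
  done
done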
00:14:24.876 [2024-12-06 10:13:30.889631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70947 ] 00:14:25.133 [2024-12-06 10:13:31.048072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.133 [2024-12-06 10:13:31.148340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.717 xnvme_bdev 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70947 00:14:25.717 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70947 ']' 00:14:25.974 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70947 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70947 00:14:25.975 killing process with pid 70947 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70947' 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70947 00:14:25.975 10:13:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70947 00:14:27.346 00:14:27.346 real 0m2.613s 00:14:27.346 user 0m2.691s 00:14:27.346 sys 0m0.365s 00:14:27.346 10:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.346 10:13:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.346 ************************************ 00:14:27.346 END TEST xnvme_rpc 00:14:27.346 ************************************ 00:14:27.346 10:13:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:27.346 10:13:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.346 10:13:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.346 10:13:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.346 ************************************ 00:14:27.346 START TEST xnvme_bdevperf 00:14:27.346 ************************************ 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.346 10:13:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.605 { 00:14:27.605 "subsystems": [ 00:14:27.605 { 00:14:27.605 "subsystem": "bdev", 00:14:27.605 "config": [ 00:14:27.605 { 00:14:27.605 "params": { 00:14:27.605 "io_mechanism": "io_uring_cmd", 00:14:27.605 "conserve_cpu": false, 00:14:27.605 "filename": "/dev/ng0n1", 00:14:27.605 "name": "xnvme_bdev" 00:14:27.605 }, 00:14:27.605 "method": "bdev_xnvme_create" 00:14:27.605 }, 00:14:27.605 { 00:14:27.605 "method": "bdev_wait_for_examine" 00:14:27.605 } 00:14:27.605 ] 00:14:27.605 } 00:14:27.605 ] 00:14:27.605 } 00:14:27.605 [2024-12-06 10:13:33.554347] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:14:27.605 [2024-12-06 10:13:33.554476] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71021 ] 00:14:27.605 [2024-12-06 10:13:33.715222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.863 [2024-12-06 10:13:33.817939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.122 Running I/O for 5 seconds... 00:14:29.984 36826.00 IOPS, 143.85 MiB/s [2024-12-06T10:13:37.082Z] 36590.50 IOPS, 142.93 MiB/s [2024-12-06T10:13:38.452Z] 36238.33 IOPS, 141.56 MiB/s [2024-12-06T10:13:39.387Z] 36485.50 IOPS, 142.52 MiB/s [2024-12-06T10:13:39.387Z] 36741.80 IOPS, 143.52 MiB/s 00:14:33.220 Latency(us) 00:14:33.220 [2024-12-06T10:13:39.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.221 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:33.221 xnvme_bdev : 5.00 36729.61 143.48 0.00 0.00 1738.50 318.23 42144.69 00:14:33.221 [2024-12-06T10:13:39.388Z] =================================================================================================================== 00:14:33.221 [2024-12-06T10:13:39.388Z] Total : 36729.61 143.48 0.00 0.00 1738.50 318.23 42144.69 00:14:33.788 10:13:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.788 10:13:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:33.788 10:13:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:33.788 10:13:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:33.788 10:13:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.788 { 00:14:33.788 "subsystems": [ 00:14:33.788 { 00:14:33.788 "subsystem": "bdev", 00:14:33.788 "config": [ 00:14:33.788 { 00:14:33.788 "params": { 00:14:33.788 "io_mechanism": "io_uring_cmd", 00:14:33.788 "conserve_cpu": false, 00:14:33.788 "filename": "/dev/ng0n1", 00:14:33.788 "name": "xnvme_bdev" 00:14:33.788 }, 00:14:33.788 "method": "bdev_xnvme_create" 00:14:33.788 }, 00:14:33.788 { 00:14:33.788 "method": "bdev_wait_for_examine" 00:14:33.788 } 00:14:33.788 ] 00:14:33.788 } 00:14:33.788 ] 00:14:33.788 } 00:14:33.788 [2024-12-06 10:13:39.844775] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
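Each xnvme_bdevperf test, like the one starting here, loops over a list of I/O patterns and launches bdevperf once per pattern (xnvme.sh@15 and @17). A sketch of that inner loop; the pattern list is inferred from the runs visible in this log (randread, randwrite, unmap, write_zeroes for io_uring_cmd) and is illustrative only:

io_pattern_ref=(randread randwrite unmap write_zeroes)  # assumed contents
for io_pattern in "${io_pattern_ref[@]}"; do
  # -q 64: queue depth, -t 5: run seconds, -T: bdev name, -o 4096: I/O size
  "$rootdir"/build/examples/bdevperf --json <(gen_conf) \
    -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done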
00:14:33.788 [2024-12-06 10:13:39.844891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71095 ] 00:14:34.046 [2024-12-06 10:13:40.005712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.046 [2024-12-06 10:13:40.106872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.304 Running I/O for 5 seconds... 00:14:36.610 12213.00 IOPS, 47.71 MiB/s [2024-12-06T10:13:43.712Z] 9030.50 IOPS, 35.28 MiB/s [2024-12-06T10:13:44.648Z] 7737.33 IOPS, 30.22 MiB/s [2024-12-06T10:13:45.588Z] 8414.00 IOPS, 32.87 MiB/s [2024-12-06T10:13:45.588Z] 9309.40 IOPS, 36.36 MiB/s 00:14:39.421 Latency(us) 00:14:39.421 [2024-12-06T10:13:45.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.421 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:39.421 xnvme_bdev : 5.01 9299.29 36.33 0.00 0.00 6869.15 56.32 596881.72 00:14:39.421 [2024-12-06T10:13:45.588Z] =================================================================================================================== 00:14:39.421 [2024-12-06T10:13:45.588Z] Total : 9299.29 36.33 0.00 0.00 6869.15 56.32 596881.72 00:14:39.983 10:13:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.983 10:13:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:39.983 10:13:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:39.983 10:13:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:39.983 10:13:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.983 { 00:14:39.983 "subsystems": [ 00:14:39.983 { 00:14:39.983 "subsystem": "bdev", 00:14:39.983 "config": [ 00:14:39.983 { 00:14:39.983 "params": { 00:14:39.983 "io_mechanism": "io_uring_cmd", 00:14:39.983 "conserve_cpu": false, 00:14:39.983 "filename": "/dev/ng0n1", 00:14:39.983 "name": "xnvme_bdev" 00:14:39.983 }, 00:14:39.983 "method": "bdev_xnvme_create" 00:14:39.983 }, 00:14:39.983 { 00:14:39.983 "method": "bdev_wait_for_examine" 00:14:39.983 } 00:14:39.983 ] 00:14:39.983 } 00:14:39.983 ] 00:14:39.983 } 00:14:40.241 [2024-12-06 10:13:46.153435] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:14:40.241 [2024-12-06 10:13:46.153560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71166 ] 00:14:40.241 [2024-12-06 10:13:46.314238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.498 [2024-12-06 10:13:46.414457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.498 Running I/O for 5 seconds... 
00:14:42.802 64704.00 IOPS, 252.75 MiB/s [2024-12-06T10:13:49.899Z] 64768.00 IOPS, 253.00 MiB/s [2024-12-06T10:13:50.832Z] 64490.67 IOPS, 251.92 MiB/s [2024-12-06T10:13:51.765Z] 64192.00 IOPS, 250.75 MiB/s [2024-12-06T10:13:51.765Z] 68544.00 IOPS, 267.75 MiB/s 00:14:45.598 Latency(us) 00:14:45.598 [2024-12-06T10:13:51.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.598 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:45.598 xnvme_bdev : 5.00 68512.09 267.63 0.00 0.00 930.62 419.05 3982.57 00:14:45.598 [2024-12-06T10:13:51.765Z] =================================================================================================================== 00:14:45.598 [2024-12-06T10:13:51.765Z] Total : 68512.09 267.63 0.00 0.00 930.62 419.05 3982.57 00:14:46.164 10:13:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:46.164 10:13:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:46.164 10:13:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:46.164 10:13:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:46.164 10:13:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:46.164 { 00:14:46.164 "subsystems": [ 00:14:46.164 { 00:14:46.164 "subsystem": "bdev", 00:14:46.164 "config": [ 00:14:46.164 { 00:14:46.164 "params": { 00:14:46.164 "io_mechanism": "io_uring_cmd", 00:14:46.164 "conserve_cpu": false, 00:14:46.164 "filename": "/dev/ng0n1", 00:14:46.164 "name": "xnvme_bdev" 00:14:46.164 }, 00:14:46.164 "method": "bdev_xnvme_create" 00:14:46.164 }, 00:14:46.164 { 00:14:46.164 "method": "bdev_wait_for_examine" 00:14:46.164 } 00:14:46.164 ] 00:14:46.164 } 00:14:46.164 ] 00:14:46.164 } 00:14:46.164 [2024-12-06 10:13:52.272538] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:14:46.164 [2024-12-06 10:13:52.272654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71241 ] 00:14:46.422 [2024-12-06 10:13:52.429211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.422 [2024-12-06 10:13:52.503185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.681 Running I/O for 5 seconds... 
00:14:48.639 205.00 IOPS, 0.80 MiB/s [2024-12-06T10:13:56.176Z] 176.50 IOPS, 0.69 MiB/s [2024-12-06T10:13:56.739Z] 172.33 IOPS, 0.67 MiB/s [2024-12-06T10:13:58.116Z] 287.50 IOPS, 1.12 MiB/s [2024-12-06T10:13:58.117Z] 268.40 IOPS, 1.05 MiB/s 00:14:51.950 Latency(us) 00:14:51.950 [2024-12-06T10:13:58.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.950 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:51.950 xnvme_bdev : 5.28 266.39 1.04 0.00 0.00 233726.32 38.01 858219.13 00:14:51.950 [2024-12-06T10:13:58.117Z] =================================================================================================================== 00:14:51.950 [2024-12-06T10:13:58.117Z] Total : 266.39 1.04 0.00 0.00 233726.32 38.01 858219.13 00:14:52.521 00:14:52.521 real 0m25.038s 00:14:52.521 user 0m13.703s 00:14:52.521 sys 0m10.895s 00:14:52.521 10:13:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.521 ************************************ 00:14:52.521 END TEST xnvme_bdevperf 00:14:52.521 ************************************ 00:14:52.521 10:13:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:52.521 10:13:58 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:52.521 10:13:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:52.521 10:13:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.521 10:13:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.521 ************************************ 00:14:52.521 START TEST xnvme_fio_plugin 00:14:52.521 ************************************ 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:52.521 10:13:58 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:52.521 10:13:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.521 { 00:14:52.521 "subsystems": [ 00:14:52.521 { 00:14:52.521 "subsystem": "bdev", 00:14:52.521 "config": [ 00:14:52.521 { 00:14:52.521 "params": { 00:14:52.521 "io_mechanism": "io_uring_cmd", 00:14:52.521 "conserve_cpu": false, 00:14:52.521 "filename": "/dev/ng0n1", 00:14:52.521 "name": "xnvme_bdev" 00:14:52.521 }, 00:14:52.521 "method": "bdev_xnvme_create" 00:14:52.521 }, 00:14:52.521 { 00:14:52.521 "method": "bdev_wait_for_examine" 00:14:52.521 } 00:14:52.521 ] 00:14:52.521 } 00:14:52.521 ] 00:14:52.521 } 00:14:52.781 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:52.781 fio-3.35 00:14:52.781 Starting 1 thread 00:14:59.358 00:14:59.358 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71354: Fri Dec 6 10:14:04 2024 00:14:59.358 read: IOPS=48.5k, BW=190MiB/s (199MB/s)(948MiB/5001msec) 00:14:59.358 slat (nsec): min=2210, max=62978, avg=3583.99, stdev=1548.14 00:14:59.358 clat (usec): min=180, max=16048, avg=1182.34, stdev=509.89 00:14:59.358 lat (usec): min=183, max=16052, avg=1185.92, stdev=510.03 00:14:59.358 clat percentiles (usec): 00:14:59.358 | 1.00th=[ 644], 5.00th=[ 676], 10.00th=[ 709], 20.00th=[ 766], 00:14:59.358 | 30.00th=[ 816], 40.00th=[ 873], 50.00th=[ 996], 60.00th=[ 1221], 00:14:59.358 | 70.00th=[ 1450], 80.00th=[ 1631], 90.00th=[ 1811], 95.00th=[ 1975], 00:14:59.358 | 99.00th=[ 2507], 99.50th=[ 2966], 99.90th=[ 4621], 99.95th=[ 5997], 00:14:59.358 | 99.99th=[11076] 00:14:59.358 bw ( KiB/s): min=141568, max=272384, per=100.00%, avg=200280.00, stdev=54277.04, samples=9 00:14:59.358 iops : min=35392, max=68096, avg=50070.00, stdev=13569.26, samples=9 00:14:59.358 lat (usec) : 250=0.01%, 500=0.03%, 750=17.57%, 1000=32.47% 00:14:59.358 lat (msec) : 2=45.41%, 4=4.37%, 10=0.12%, 20=0.01% 00:14:59.358 cpu : usr=37.98%, sys=61.08%, ctx=28, majf=0, minf=762 00:14:59.358 IO depths : 1=1.3%, 2=2.7%, 4=5.9%, 8=12.1%, 16=24.7%, 32=51.5%, >=64=1.7% 00:14:59.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.358 complete : 0=0.0%, 4=98.4%, 8=0.1%, 
16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:59.358 issued rwts: total=242697,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.358 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:59.358 00:14:59.358 Run status group 0 (all jobs): 00:14:59.358 READ: bw=190MiB/s (199MB/s), 190MiB/s-190MiB/s (199MB/s-199MB/s), io=948MiB (994MB), run=5001-5001msec 00:14:59.358 ----------------------------------------------------- 00:14:59.358 Suppressions used: 00:14:59.358 count bytes template 00:14:59.358 1 11 /usr/src/fio/parse.c 00:14:59.358 1 8 libtcmalloc_minimal.so 00:14:59.358 1 904 libcrypto.so 00:14:59.358 ----------------------------------------------------- 00:14:59.358 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.358 10:14:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.358 { 00:14:59.358 "subsystems": [ 00:14:59.358 { 00:14:59.358 "subsystem": "bdev", 00:14:59.358 "config": [ 00:14:59.358 { 00:14:59.358 "params": { 00:14:59.358 "io_mechanism": "io_uring_cmd", 00:14:59.358 "conserve_cpu": false, 00:14:59.358 "filename": "/dev/ng0n1", 00:14:59.358 "name": "xnvme_bdev" 00:14:59.358 }, 00:14:59.358 "method": "bdev_xnvme_create" 00:14:59.358 }, 00:14:59.358 { 00:14:59.358 "method": "bdev_wait_for_examine" 00:14:59.358 } 00:14:59.358 ] 00:14:59.358 } 00:14:59.358 ] 00:14:59.358 } 00:14:59.619 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:59.619 fio-3.35 00:14:59.619 Starting 1 thread 00:15:06.204 00:15:06.204 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71449: Fri Dec 6 10:14:11 2024 00:15:06.204 write: IOPS=32.6k, BW=127MiB/s (133MB/s)(636MiB/5002msec); 0 zone resets 00:15:06.204 slat (nsec): min=2905, max=76163, avg=3898.52, stdev=2404.25 00:15:06.204 clat (usec): min=89, max=11880, avg=1813.85, stdev=659.12 00:15:06.204 lat (usec): min=93, max=11883, avg=1817.75, stdev=659.34 00:15:06.204 clat percentiles (usec): 00:15:06.204 | 1.00th=[ 840], 5.00th=[ 1221], 10.00th=[ 1336], 20.00th=[ 1467], 00:15:06.204 | 30.00th=[ 1565], 40.00th=[ 1647], 50.00th=[ 1729], 60.00th=[ 1811], 00:15:06.204 | 70.00th=[ 1909], 80.00th=[ 2040], 90.00th=[ 2245], 95.00th=[ 2474], 00:15:06.204 | 99.00th=[ 4883], 99.50th=[ 6390], 99.90th=[ 8356], 99.95th=[ 9372], 00:15:06.204 | 99.99th=[10028] 00:15:06.204 bw ( KiB/s): min=126184, max=135592, per=100.00%, avg=130239.22, stdev=3338.62, samples=9 00:15:06.204 iops : min=31546, max=33898, avg=32560.00, stdev=834.86, samples=9 00:15:06.204 lat (usec) : 100=0.01%, 250=0.03%, 500=0.13%, 750=0.43%, 1000=1.36% 00:15:06.204 lat (msec) : 2=75.56%, 4=21.03%, 10=1.45%, 20=0.02% 00:15:06.204 cpu : usr=37.13%, sys=61.53%, ctx=9, majf=0, minf=763 00:15:06.204 IO depths : 1=1.3%, 2=2.7%, 4=5.5%, 8=11.4%, 16=24.1%, 32=53.0%, >=64=1.9% 00:15:06.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.204 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:06.204 issued rwts: total=0,162870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.204 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:06.204 00:15:06.204 Run status group 0 (all jobs): 00:15:06.204 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=636MiB (667MB), run=5002-5002msec 00:15:06.204 ----------------------------------------------------- 00:15:06.204 Suppressions used: 00:15:06.204 count bytes template 00:15:06.204 1 11 /usr/src/fio/parse.c 00:15:06.204 1 8 libtcmalloc_minimal.so 00:15:06.204 1 904 libcrypto.so 00:15:06.204 ----------------------------------------------------- 00:15:06.204 00:15:06.204 00:15:06.204 real 0m13.651s 00:15:06.204 user 0m6.551s 00:15:06.204 sys 0m6.657s 00:15:06.204 10:14:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.204 ************************************ 00:15:06.204 END TEST xnvme_fio_plugin 00:15:06.204 ************************************ 00:15:06.204 10:14:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.204 10:14:12 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:06.204 10:14:12 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 
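The ldd/grep/awk and LD_PRELOAD lines above show how fio_plugin launches fio with SPDK's external spdk_bdev ioengine under ASAN: it resolves the sanitizer runtime the plugin links against and preloads it ahead of the plugin, so the interceptors are in place before the plugin loads. A trimmed sketch with paths as they appear in this log:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev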
00:15:06.204 10:14:12 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:06.204 10:14:12 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:06.204 10:14:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.204 10:14:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.204 10:14:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.204 ************************************ 00:15:06.204 START TEST xnvme_rpc 00:15:06.204 ************************************ 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71530 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71530 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71530 ']' 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.204 10:14:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.466 [2024-12-06 10:14:12.403233] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
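The rpc_cmd calls that follow talk to this spdk_tgt over its default UNIX socket (/var/tmp/spdk.sock). A minimal sketch of the same create/inspect/delete sequence using scripts/rpc.py directly; argument order and the -c (conserve_cpu) flag are taken from the logged commands:

# create the bdev: filename, name, io_mechanism, plus -c to conserve CPU
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
# read back the registered create call and verify one of its params
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# tear down
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev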
00:15:06.466 [2024-12-06 10:14:12.403392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71530 ] 00:15:06.466 [2024-12-06 10:14:12.566984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.728 [2024-12-06 10:14:12.689390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.325 xnvme_bdev 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:07.325 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71530 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71530 ']' 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71530 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71530 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.586 killing process with pid 71530 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71530' 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71530 00:15:07.586 10:14:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71530 00:15:09.503 00:15:09.503 real 0m2.907s 00:15:09.503 user 0m2.910s 00:15:09.503 sys 0m0.471s 00:15:09.503 10:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.503 10:14:15 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.503 ************************************ 00:15:09.503 END TEST xnvme_rpc 00:15:09.503 ************************************ 00:15:09.503 10:14:15 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:09.503 10:14:15 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.503 10:14:15 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.503 10:14:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.503 ************************************ 00:15:09.503 START TEST xnvme_bdevperf 00:15:09.503 ************************************ 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:09.503 10:14:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.503 { 00:15:09.503 "subsystems": [ 00:15:09.503 { 00:15:09.503 "subsystem": "bdev", 00:15:09.503 "config": [ 00:15:09.503 { 00:15:09.503 "params": { 00:15:09.503 "io_mechanism": "io_uring_cmd", 00:15:09.503 "conserve_cpu": true, 00:15:09.503 "filename": "/dev/ng0n1", 00:15:09.503 "name": "xnvme_bdev" 00:15:09.503 }, 00:15:09.503 "method": "bdev_xnvme_create" 00:15:09.503 }, 00:15:09.503 { 00:15:09.503 "method": "bdev_wait_for_examine" 00:15:09.503 } 00:15:09.503 ] 00:15:09.503 } 00:15:09.503 ] 00:15:09.503 } 00:15:09.503 [2024-12-06 10:14:15.361421] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:15:09.503 [2024-12-06 10:14:15.361579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71604 ] 00:15:09.503 [2024-12-06 10:14:15.526097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.504 [2024-12-06 10:14:15.648441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.763 Running I/O for 5 seconds... 00:15:12.083 39425.00 IOPS, 154.00 MiB/s [2024-12-06T10:14:19.191Z] 39135.50 IOPS, 152.87 MiB/s [2024-12-06T10:14:20.131Z] 37211.67 IOPS, 145.36 MiB/s [2024-12-06T10:14:21.096Z] 36433.25 IOPS, 142.32 MiB/s 00:15:14.929 Latency(us) 00:15:14.929 [2024-12-06T10:14:21.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.930 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:14.930 xnvme_bdev : 5.00 35893.53 140.21 0.00 0.00 1778.87 346.58 26819.35 00:15:14.930 [2024-12-06T10:14:21.097Z] =================================================================================================================== 00:15:14.930 [2024-12-06T10:14:21.097Z] Total : 35893.53 140.21 0.00 0.00 1778.87 346.58 26819.35 00:15:15.872 10:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.873 10:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:15.873 10:14:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.873 10:14:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.873 10:14:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.873 { 00:15:15.873 "subsystems": [ 00:15:15.873 { 00:15:15.873 "subsystem": "bdev", 00:15:15.873 "config": [ 00:15:15.873 { 00:15:15.873 "params": { 00:15:15.873 "io_mechanism": "io_uring_cmd", 00:15:15.873 "conserve_cpu": true, 00:15:15.873 "filename": "/dev/ng0n1", 00:15:15.873 "name": "xnvme_bdev" 00:15:15.873 }, 00:15:15.873 "method": "bdev_xnvme_create" 00:15:15.873 }, 00:15:15.873 { 00:15:15.873 "method": "bdev_wait_for_examine" 00:15:15.873 } 00:15:15.873 ] 00:15:15.873 } 00:15:15.873 ] 00:15:15.873 } 00:15:15.873 [2024-12-06 10:14:21.769432] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
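The JSON blob above is what gen_conf emits: the harness streams the bdev subsystem config to bdevperf on fd 62 and points --json at /dev/fd/62. A minimal stand-alone sketch of the same invocation with the config inlined as a heredoc; the repo path and the /dev/ng0n1 namespace are taken from this run and will differ on other hosts:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON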
00:15:15.873 [2024-12-06 10:14:21.769594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71678 ] 00:15:15.873 [2024-12-06 10:14:21.934971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.155 [2024-12-06 10:14:22.056761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.422 Running I/O for 5 seconds... 00:15:18.305 35277.00 IOPS, 137.80 MiB/s [2024-12-06T10:14:25.410Z] 36735.00 IOPS, 143.50 MiB/s [2024-12-06T10:14:26.790Z] 36197.33 IOPS, 141.40 MiB/s [2024-12-06T10:14:27.362Z] 36309.75 IOPS, 141.83 MiB/s 00:15:21.195 Latency(us) 00:15:21.195 [2024-12-06T10:14:27.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.195 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:21.195 xnvme_bdev : 5.00 36159.29 141.25 0.00 0.00 1765.23 633.30 6704.84 00:15:21.195 [2024-12-06T10:14:27.362Z] =================================================================================================================== 00:15:21.195 [2024-12-06T10:14:27.362Z] Total : 36159.29 141.25 0.00 0.00 1765.23 633.30 6704.84 00:15:22.140 10:14:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.140 10:14:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:22.140 10:14:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:22.140 10:14:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.140 10:14:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.140 { 00:15:22.140 "subsystems": [ 00:15:22.140 { 00:15:22.140 "subsystem": "bdev", 00:15:22.140 "config": [ 00:15:22.140 { 00:15:22.140 "params": { 00:15:22.140 "io_mechanism": "io_uring_cmd", 00:15:22.140 "conserve_cpu": true, 00:15:22.140 "filename": "/dev/ng0n1", 00:15:22.140 "name": "xnvme_bdev" 00:15:22.140 }, 00:15:22.140 "method": "bdev_xnvme_create" 00:15:22.140 }, 00:15:22.140 { 00:15:22.140 "method": "bdev_wait_for_examine" 00:15:22.140 } 00:15:22.140 ] 00:15:22.140 } 00:15:22.140 ] 00:15:22.140 } 00:15:22.140 [2024-12-06 10:14:28.211770] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:15:22.140 [2024-12-06 10:14:28.211907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71747 ] 00:15:22.399 [2024-12-06 10:14:28.374148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.399 [2024-12-06 10:14:28.503797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.657 Running I/O for 5 seconds... 
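A quick check on the randwrite table above: the MiB/s column is just IOPS times the 4096-byte IO size, e.g. for the run total:

awk 'BEGIN { printf "%.2f MiB/s\n", 36159.29 * 4096 / (1024 * 1024) }'   # -> 141.25 MiB/s, as reported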
00:15:24.978 75968.00 IOPS, 296.75 MiB/s [2024-12-06T10:14:32.195Z] 77664.00 IOPS, 303.38 MiB/s [2024-12-06T10:14:33.136Z] 78378.67 IOPS, 306.17 MiB/s [2024-12-06T10:14:34.078Z] 80656.00 IOPS, 315.06 MiB/s 00:15:27.911 Latency(us) 00:15:27.911 [2024-12-06T10:14:34.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.911 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:27.911 xnvme_bdev : 5.00 80470.68 314.34 0.00 0.00 791.84 341.86 2671.85 00:15:27.911 [2024-12-06T10:14:34.078Z] =================================================================================================================== 00:15:27.911 [2024-12-06T10:14:34.078Z] Total : 80470.68 314.34 0.00 0.00 791.84 341.86 2671.85 00:15:28.481 10:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.481 10:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:28.481 10:14:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:28.481 10:14:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:28.481 10:14:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:28.481 { 00:15:28.481 "subsystems": [ 00:15:28.481 { 00:15:28.481 "subsystem": "bdev", 00:15:28.481 "config": [ 00:15:28.481 { 00:15:28.481 "params": { 00:15:28.481 "io_mechanism": "io_uring_cmd", 00:15:28.481 "conserve_cpu": true, 00:15:28.481 "filename": "/dev/ng0n1", 00:15:28.481 "name": "xnvme_bdev" 00:15:28.481 }, 00:15:28.481 "method": "bdev_xnvme_create" 00:15:28.481 }, 00:15:28.481 { 00:15:28.481 "method": "bdev_wait_for_examine" 00:15:28.481 } 00:15:28.481 ] 00:15:28.481 } 00:15:28.481 ] 00:15:28.481 } 00:15:28.481 [2024-12-06 10:14:34.519775] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:15:28.481 [2024-12-06 10:14:34.519894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71821 ] 00:15:28.742 [2024-12-06 10:14:34.677346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.742 [2024-12-06 10:14:34.756925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.002 Running I/O for 5 seconds... 
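The randread, randwrite, unmap and write_zeroes runs (above and below) all come out of one loop in xnvme.sh, visible in the repeated `for io_pattern in "${io_pattern_ref[@]}"` trace lines. A condensed sketch; the array name io_uring_cmd matches the nameref traced above, its contents are inferred from the four workloads actually run, and $rootdir stands in for the traced /home/vagrant/spdk_repo/spdk path:

xnvme_bdevperf() {
    local io_uring_cmd=(randread randwrite unmap write_zeroes)  # inferred from the four runs
    local io_pattern
    local -n io_pattern_ref=io_uring_cmd    # nameref, resolved per io mechanism
    for io_pattern in "${io_pattern_ref[@]}"; do
        # the harness feeds gen_conf output on fd 62; process substitution is equivalent here
        "$rootdir/build/examples/bdevperf" --json <(gen_conf) \
            -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done
}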
00:15:30.897 42417.00 IOPS, 165.69 MiB/s [2024-12-06T10:14:38.006Z] 35793.00 IOPS, 139.82 MiB/s [2024-12-06T10:14:39.378Z] 33273.00 IOPS, 129.97 MiB/s [2024-12-06T10:14:39.995Z] 31200.00 IOPS, 121.88 MiB/s [2024-12-06T10:14:39.995Z] 30002.60 IOPS, 117.20 MiB/s 00:15:33.828 Latency(us) 00:15:33.828 [2024-12-06T10:14:39.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.828 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:33.828 xnvme_bdev : 5.01 29981.50 117.12 0.00 0.00 2129.99 50.81 23996.26 00:15:33.828 [2024-12-06T10:14:39.995Z] =================================================================================================================== 00:15:33.828 [2024-12-06T10:14:39.995Z] Total : 29981.50 117.12 0.00 0.00 2129.99 50.81 23996.26 00:15:34.771 00:15:34.771 real 0m25.441s 00:15:34.771 user 0m17.360s 00:15:34.771 sys 0m6.230s 00:15:34.771 10:14:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.771 10:14:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.771 ************************************ 00:15:34.771 END TEST xnvme_bdevperf 00:15:34.771 ************************************ 00:15:34.771 10:14:40 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:34.771 10:14:40 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.771 10:14:40 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.771 10:14:40 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.771 ************************************ 00:15:34.771 START TEST xnvme_fio_plugin 00:15:34.771 ************************************ 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 
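The fio wrapper being set up here preloads the ASan runtime so that fio, which dlopen()s the SPDK ioengine plugin, resolves sanitizer symbols before the plugin does; the ldd/grep/awk steps are traced just below. A condensed sketch of that logic (the real helper also tries libclang_rt.asan and breaks on the first hit):

fio_plugin() {
    local plugin=$1; shift
    local asan_lib
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 in this run
    # preload the sanitizer runtime ahead of the dlopen()ed ioengine
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"
}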
00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.771 10:14:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.771 { 00:15:34.771 "subsystems": [ 00:15:34.771 { 00:15:34.771 "subsystem": "bdev", 00:15:34.771 "config": [ 00:15:34.771 { 00:15:34.771 "params": { 00:15:34.771 "io_mechanism": "io_uring_cmd", 00:15:34.771 "conserve_cpu": true, 00:15:34.771 "filename": "/dev/ng0n1", 00:15:34.771 "name": "xnvme_bdev" 00:15:34.771 }, 00:15:34.771 "method": "bdev_xnvme_create" 00:15:34.771 }, 00:15:34.771 { 00:15:34.771 "method": "bdev_wait_for_examine" 00:15:34.771 } 00:15:34.771 ] 00:15:34.771 } 00:15:34.771 ] 00:15:34.771 } 00:15:35.032 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:35.032 fio-3.35 00:15:35.032 Starting 1 thread 00:15:41.582 00:15:41.582 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71938: Fri Dec 6 10:14:46 2024 00:15:41.582 read: IOPS=39.5k, BW=154MiB/s (162MB/s)(772MiB/5001msec) 00:15:41.582 slat (nsec): min=2857, max=69537, avg=3550.20, stdev=1719.35 00:15:41.582 clat (usec): min=703, max=3917, avg=1478.55, stdev=308.68 00:15:41.582 lat (usec): min=706, max=3953, avg=1482.10, stdev=308.95 00:15:41.582 clat percentiles (usec): 00:15:41.582 | 1.00th=[ 938], 5.00th=[ 1057], 10.00th=[ 1123], 20.00th=[ 1221], 00:15:41.582 | 30.00th=[ 1303], 40.00th=[ 1369], 50.00th=[ 1434], 60.00th=[ 1516], 00:15:41.582 | 70.00th=[ 1582], 80.00th=[ 1696], 90.00th=[ 1876], 95.00th=[ 2057], 00:15:41.582 | 99.00th=[ 2409], 99.50th=[ 2573], 99.90th=[ 3097], 99.95th=[ 3359], 00:15:41.582 | 99.99th=[ 3720] 00:15:41.582 bw ( KiB/s): min=139776, max=168960, per=100.00%, avg=158577.78, stdev=8371.15, samples=9 00:15:41.582 iops : min=34944, max=42240, avg=39644.44, stdev=2092.79, samples=9 00:15:41.582 lat (usec) : 750=0.01%, 1000=2.60% 00:15:41.582 lat (msec) : 2=91.18%, 4=6.21% 00:15:41.582 cpu : usr=62.78%, sys=34.82%, ctx=36, majf=0, minf=762 00:15:41.582 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:41.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.582 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:15:41.582 issued rwts: total=197728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.582 00:15:41.582 Run status group 0 (all jobs): 00:15:41.582 READ: bw=154MiB/s (162MB/s), 154MiB/s-154MiB/s (162MB/s-162MB/s), io=772MiB (810MB), run=5001-5001msec 00:15:41.582 ----------------------------------------------------- 00:15:41.582 Suppressions used: 00:15:41.582 count bytes template 00:15:41.582 1 11 /usr/src/fio/parse.c 00:15:41.582 1 8 libtcmalloc_minimal.so 00:15:41.582 1 904 libcrypto.so 00:15:41.582 ----------------------------------------------------- 00:15:41.582 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:41.582 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.583 10:14:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.583 { 00:15:41.583 "subsystems": [ 00:15:41.583 { 00:15:41.583 "subsystem": "bdev", 00:15:41.583 "config": [ 00:15:41.583 { 00:15:41.583 "params": { 00:15:41.583 "io_mechanism": "io_uring_cmd", 00:15:41.583 "conserve_cpu": true, 00:15:41.583 "filename": "/dev/ng0n1", 00:15:41.583 "name": "xnvme_bdev" 00:15:41.583 }, 00:15:41.583 "method": "bdev_xnvme_create" 00:15:41.583 }, 00:15:41.583 { 00:15:41.583 "method": "bdev_wait_for_examine" 00:15:41.583 } 00:15:41.583 ] 00:15:41.583 } 00:15:41.583 ] 00:15:41.583 } 00:15:41.840 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:41.840 fio-3.35 00:15:41.840 Starting 1 thread 00:15:48.398 00:15:48.398 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72030: Fri Dec 6 10:14:53 2024 00:15:48.398 write: IOPS=40.6k, BW=159MiB/s (166MB/s)(794MiB/5001msec); 0 zone resets 00:15:48.398 slat (nsec): min=2895, max=65126, avg=4009.44, stdev=2237.20 00:15:48.398 clat (usec): min=428, max=4991, avg=1416.84, stdev=276.56 00:15:48.398 lat (usec): min=432, max=4994, avg=1420.85, stdev=277.26 00:15:48.398 clat percentiles (usec): 00:15:48.398 | 1.00th=[ 930], 5.00th=[ 1045], 10.00th=[ 1106], 20.00th=[ 1188], 00:15:48.398 | 30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1385], 60.00th=[ 1450], 00:15:48.398 | 70.00th=[ 1516], 80.00th=[ 1598], 90.00th=[ 1762], 95.00th=[ 1909], 00:15:48.398 | 99.00th=[ 2311], 99.50th=[ 2474], 99.90th=[ 2802], 99.95th=[ 3130], 00:15:48.398 | 99.99th=[ 3687] 00:15:48.398 bw ( KiB/s): min=157352, max=169176, per=100.00%, avg=162620.33, stdev=3869.97, samples=9 00:15:48.398 iops : min=39338, max=42294, avg=40655.00, stdev=967.45, samples=9 00:15:48.398 lat (usec) : 500=0.01%, 750=0.01%, 1000=2.89% 00:15:48.398 lat (msec) : 2=93.61%, 4=3.50%, 10=0.01% 00:15:48.398 cpu : usr=62.14%, sys=34.66%, ctx=13, majf=0, minf=763 00:15:48.398 IO depths : 1=1.5%, 2=3.0%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.2%, >=64=1.6% 00:15:48.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.398 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:48.398 issued rwts: total=0,203253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.398 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:48.398 00:15:48.398 Run status group 0 (all jobs): 00:15:48.398 WRITE: bw=159MiB/s (166MB/s), 159MiB/s-159MiB/s (166MB/s-166MB/s), io=794MiB (833MB), run=5001-5001msec 00:15:48.398 ----------------------------------------------------- 00:15:48.398 Suppressions used: 00:15:48.398 count bytes template 00:15:48.398 1 11 /usr/src/fio/parse.c 00:15:48.398 1 8 libtcmalloc_minimal.so 00:15:48.398 1 904 libcrypto.so 00:15:48.398 ----------------------------------------------------- 00:15:48.398 00:15:48.398 00:15:48.398 real 0m13.565s 00:15:48.398 user 0m8.930s 00:15:48.398 sys 0m4.037s 00:15:48.398 10:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.398 ************************************ 00:15:48.398 END TEST xnvme_fio_plugin 00:15:48.398 ************************************ 00:15:48.398 10:14:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.398 Process with pid 71530 is not found 00:15:48.398 10:14:54 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71530 00:15:48.398 10:14:54 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71530 ']' 00:15:48.398 10:14:54 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 71530 00:15:48.398 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71530) - No such process 00:15:48.398 10:14:54 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71530 is not found' 00:15:48.398 ************************************ 00:15:48.398 00:15:48.398 real 3m27.439s 00:15:48.398 user 2m0.598s 00:15:48.398 sys 1m12.308s 00:15:48.398 10:14:54 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.398 10:14:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.398 END TEST nvme_xnvme 00:15:48.398 ************************************ 00:15:48.398 10:14:54 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.398 10:14:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.398 10:14:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.398 10:14:54 -- common/autotest_common.sh@10 -- # set +x 00:15:48.398 ************************************ 00:15:48.398 START TEST blockdev_xnvme 00:15:48.398 ************************************ 00:15:48.398 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.398 * Looking for test storage... 00:15:48.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:48.398 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.398 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.398 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.658 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.658 10:14:54 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:48.658 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.658 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.658 --rc genhtml_branch_coverage=1 00:15:48.658 --rc genhtml_function_coverage=1 00:15:48.658 --rc genhtml_legend=1 00:15:48.658 --rc geninfo_all_blocks=1 00:15:48.658 --rc geninfo_unexecuted_blocks=1 00:15:48.658 00:15:48.658 ' 00:15:48.658 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.658 --rc genhtml_branch_coverage=1 00:15:48.658 --rc genhtml_function_coverage=1 00:15:48.658 --rc genhtml_legend=1 00:15:48.658 --rc geninfo_all_blocks=1 00:15:48.658 --rc geninfo_unexecuted_blocks=1 00:15:48.658 00:15:48.658 ' 00:15:48.658 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.658 --rc genhtml_branch_coverage=1 00:15:48.658 --rc genhtml_function_coverage=1 00:15:48.658 --rc genhtml_legend=1 00:15:48.658 --rc geninfo_all_blocks=1 00:15:48.658 --rc geninfo_unexecuted_blocks=1 00:15:48.658 00:15:48.658 ' 00:15:48.658 10:14:54 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.658 --rc genhtml_branch_coverage=1 00:15:48.658 --rc genhtml_function_coverage=1 00:15:48.658 --rc genhtml_legend=1 00:15:48.658 --rc geninfo_all_blocks=1 00:15:48.658 --rc geninfo_unexecuted_blocks=1 00:15:48.658 00:15:48.658 ' 00:15:48.658 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:48.658 10:14:54 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:48.658 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:48.658 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:48.658 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:48.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72161 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72161 00:15:48.659 10:14:54 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72161 ']' 00:15:48.659 10:14:54 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.659 10:14:54 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.659 10:14:54 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.659 10:14:54 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:48.659 10:14:54 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.659 10:14:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.659 [2024-12-06 10:14:54.710155] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
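Boilerplate shared by every one of these tests: start spdk_tgt, wait for its RPC socket, and arm a trap so a failed test still kills the target. A condensed sketch of the sequence traced above; waitforlisten and killprocess live in autotest_common.sh, and $rootdir stands in for the traced repo path:

"$rootdir/build/bin/spdk_tgt" '' '' &
spdk_tgt_pid=$!
trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock answers RPCs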
00:15:48.659 [2024-12-06 10:14:54.710275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72161 ] 00:15:48.917 [2024-12-06 10:14:54.867060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.917 [2024-12-06 10:14:54.961784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.481 10:14:55 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.481 10:14:55 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:49.481 10:14:55 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:49.481 10:14:55 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:49.481 10:14:55 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:49.481 10:14:55 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:49.481 10:14:55 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.047 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.306 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:50.306 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:50.565 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:50.565 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:50.565 10:14:56 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:50.565 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.565 10:14:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:50.566 nvme0n1 00:15:50.566 nvme0n2 00:15:50.566 nvme0n3 00:15:50.566 nvme1n1 00:15:50.566 nvme2n1 00:15:50.566 nvme3n1 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.566 
10:14:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.566 10:14:56 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:50.566 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "68b6a75b-2571-4b5e-9be6-35a6ddcb0592"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "68b6a75b-2571-4b5e-9be6-35a6ddcb0592",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "6aa42ffe-44e2-4ec5-9125-bd8a36ce994d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6aa42ffe-44e2-4ec5-9125-bd8a36ce994d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ae679d5b-52ef-4174-854a-f2d01ab5c488"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae679d5b-52ef-4174-854a-f2d01ab5c488",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "80893f48-4266-4341-a4c7-39de77db2385"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "80893f48-4266-4341-a4c7-39de77db2385",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "fa3251f0-5b39-41fd-9221-23e8ca291865"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fa3251f0-5b39-41fd-9221-23e8ca291865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "69df9585-0070-4859-922d-d802ec373404"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "69df9585-0070-4859-922d-d802ec373404",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:50.838 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:50.838 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:50.838 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:50.838 10:14:56 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72161 00:15:50.838 10:14:56 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72161 ']' 00:15:50.838 10:14:56 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72161 00:15:50.838 10:14:56 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:50.839 10:14:56 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.839 10:14:56 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 72161 00:15:50.839 killing process with pid 72161 00:15:50.839 10:14:56 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.839 10:14:56 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.839 10:14:56 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72161' 00:15:50.839 10:14:56 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72161 00:15:50.839 10:14:56 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72161 00:15:52.213 10:14:58 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:52.213 10:14:58 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.213 10:14:58 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:52.213 10:14:58 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.213 10:14:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.213 ************************************ 00:15:52.213 START TEST bdev_hello_world 00:15:52.213 ************************************ 00:15:52.213 10:14:58 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.213 [2024-12-06 10:14:58.338415] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:15:52.213 [2024-12-06 10:14:58.338541] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72447 ] 00:15:52.471 [2024-12-06 10:14:58.490852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.472 [2024-12-06 10:14:58.584504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.050 [2024-12-06 10:14:58.946158] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:53.050 [2024-12-06 10:14:58.946354] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:53.050 [2024-12-06 10:14:58.946376] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:53.050 [2024-12-06 10:14:58.948219] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:53.050 [2024-12-06 10:14:58.950688] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:53.050 [2024-12-06 10:14:58.950718] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:53.050 [2024-12-06 10:14:58.951502] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
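hello_bdev above read its bdev table from test/bdev/bdev.json and opened nvme0n1 out of it. A sketch of what that file plausibly contains, assuming it mirrors the six bdev_xnvme_create calls printed by setup_xnvme_conf earlier; only the first entry is spelled out, the other five namespaces follow the same shape:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "filename": "/dev/nvme0n1",
            "name": "nvme0n1",
            "io_mechanism": "io_uring",
            "conserve_cpu": true
          }
        }
      ]
    }
  ]
}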
00:15:53.050 00:15:53.050 [2024-12-06 10:14:58.951551] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:53.615 00:15:53.615 real 0m1.385s 00:15:53.615 user 0m1.070s 00:15:53.615 sys 0m0.173s 00:15:53.615 10:14:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.615 10:14:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:53.615 ************************************ 00:15:53.615 END TEST bdev_hello_world 00:15:53.616 ************************************ 00:15:53.616 10:14:59 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:53.616 10:14:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.616 10:14:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.616 10:14:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.616 ************************************ 00:15:53.616 START TEST bdev_bounds 00:15:53.616 ************************************ 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72478 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:53.616 Process bdevio pid: 72478 00:15:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72478' 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72478 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72478 ']' 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.616 10:14:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:53.873 [2024-12-06 10:14:59.789903] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
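bdev_bounds splits into a server and a driver: bdevio loads the same bdev.json and exposes the RPC socket, then tests.py triggers the per-bdev suites whose results follow. Condensed from the command lines traced here, again with $rootdir standing in for the traced repo path:

"$rootdir/test/bdev/bdevio/bdevio" -w -s 0 --json "$rootdir/test/bdev/bdev.json" '' &
bdevio_pid=$!
trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$bdevio_pid"
"$rootdir/test/bdev/bdevio/tests.py" perform_tests    # drives the suites listed below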
00:15:53.873 [2024-12-06 10:14:59.790478] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72478 ] 00:15:53.873 [2024-12-06 10:14:59.950981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.131 [2024-12-06 10:15:00.063622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.131 [2024-12-06 10:15:00.064297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.131 [2024-12-06 10:15:00.064424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.697 10:15:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.697 10:15:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:54.697 10:15:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:54.697 I/O targets: 00:15:54.697 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.697 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.697 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.697 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:54.697 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:54.697 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:54.697 00:15:54.697 00:15:54.697 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.697 http://cunit.sourceforge.net/ 00:15:54.697 00:15:54.697 00:15:54.697 Suite: bdevio tests on: nvme3n1 00:15:54.697 Test: blockdev write read block ...passed 00:15:54.697 Test: blockdev write zeroes read block ...passed 00:15:54.697 Test: blockdev write zeroes read no split ...passed 00:15:54.697 Test: blockdev write zeroes read split ...passed 00:15:54.697 Test: blockdev write zeroes read split partial ...passed 00:15:54.697 Test: blockdev reset ...passed 00:15:54.697 Test: blockdev write read 8 blocks ...passed 00:15:54.697 Test: blockdev write read size > 128k ...passed 00:15:54.697 Test: blockdev write read invalid size ...passed 00:15:54.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.697 Test: blockdev write read max offset ...passed 00:15:54.697 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.697 Test: blockdev writev readv 8 blocks ...passed 00:15:54.697 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.697 Test: blockdev writev readv block ...passed 00:15:54.697 Test: blockdev writev readv size > 128k ...passed 00:15:54.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.697 Test: blockdev comparev and writev ...passed 00:15:54.697 Test: blockdev nvme passthru rw ...passed 00:15:54.697 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.697 Test: blockdev nvme admin passthru ...passed 00:15:54.697 Test: blockdev copy ...passed 00:15:54.697 Suite: bdevio tests on: nvme2n1 00:15:54.697 Test: blockdev write read block ...passed 00:15:54.697 Test: blockdev write zeroes read block ...passed 00:15:54.697 Test: blockdev write zeroes read no split ...passed 00:15:54.697 Test: blockdev write zeroes read split ...passed 00:15:54.697 Test: blockdev write zeroes read split partial ...passed 00:15:54.697 Test: blockdev reset ...passed 
00:15:54.697 Test: blockdev write read 8 blocks ...passed 00:15:54.697 Test: blockdev write read size > 128k ...passed 00:15:54.697 Test: blockdev write read invalid size ...passed 00:15:54.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.697 Test: blockdev write read max offset ...passed 00:15:54.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.956 Test: blockdev writev readv 8 blocks ...passed 00:15:54.956 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.956 Test: blockdev writev readv block ...passed 00:15:54.956 Test: blockdev writev readv size > 128k ...passed 00:15:54.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.956 Test: blockdev comparev and writev ...passed 00:15:54.956 Test: blockdev nvme passthru rw ...passed 00:15:54.956 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.956 Test: blockdev nvme admin passthru ...passed 00:15:54.956 Test: blockdev copy ...passed 00:15:54.956 Suite: bdevio tests on: nvme1n1 00:15:54.956 Test: blockdev write read block ...passed 00:15:54.956 Test: blockdev write zeroes read block ...passed 00:15:54.956 Test: blockdev write zeroes read no split ...passed 00:15:54.956 Test: blockdev write zeroes read split ...passed 00:15:54.956 Test: blockdev write zeroes read split partial ...passed 00:15:54.956 Test: blockdev reset ...passed 00:15:54.956 Test: blockdev write read 8 blocks ...passed 00:15:54.956 Test: blockdev write read size > 128k ...passed 00:15:54.956 Test: blockdev write read invalid size ...passed 00:15:54.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.956 Test: blockdev write read max offset ...passed 00:15:54.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.956 Test: blockdev writev readv 8 blocks ...passed 00:15:54.956 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.956 Test: blockdev writev readv block ...passed 00:15:54.956 Test: blockdev writev readv size > 128k ...passed 00:15:54.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.956 Test: blockdev comparev and writev ...passed 00:15:54.956 Test: blockdev nvme passthru rw ...passed 00:15:54.956 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.956 Test: blockdev nvme admin passthru ...passed 00:15:54.956 Test: blockdev copy ...passed 00:15:54.956 Suite: bdevio tests on: nvme0n3 00:15:54.956 Test: blockdev write read block ...passed 00:15:54.956 Test: blockdev write zeroes read block ...passed 00:15:54.956 Test: blockdev write zeroes read no split ...passed 00:15:54.956 Test: blockdev write zeroes read split ...passed 00:15:54.956 Test: blockdev write zeroes read split partial ...passed 00:15:54.956 Test: blockdev reset ...passed 00:15:54.956 Test: blockdev write read 8 blocks ...passed 00:15:54.956 Test: blockdev write read size > 128k ...passed 00:15:54.956 Test: blockdev write read invalid size ...passed 00:15:54.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.956 Test: blockdev write read max offset ...passed 00:15:54.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.956 Test: blockdev writev readv 8 blocks 
...passed 00:15:54.956 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.956 Test: blockdev writev readv block ...passed 00:15:54.956 Test: blockdev writev readv size > 128k ...passed 00:15:54.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.956 Test: blockdev comparev and writev ...passed 00:15:54.956 Test: blockdev nvme passthru rw ...passed 00:15:54.956 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.956 Test: blockdev nvme admin passthru ...passed 00:15:54.956 Test: blockdev copy ...passed 00:15:54.956 Suite: bdevio tests on: nvme0n2 00:15:54.956 Test: blockdev write read block ...passed 00:15:54.956 Test: blockdev write zeroes read block ...passed 00:15:54.956 Test: blockdev write zeroes read no split ...passed 00:15:54.956 Test: blockdev write zeroes read split ...passed 00:15:54.956 Test: blockdev write zeroes read split partial ...passed 00:15:54.956 Test: blockdev reset ...passed 00:15:54.956 Test: blockdev write read 8 blocks ...passed 00:15:54.956 Test: blockdev write read size > 128k ...passed 00:15:54.956 Test: blockdev write read invalid size ...passed 00:15:54.956 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.956 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.956 Test: blockdev write read max offset ...passed 00:15:54.956 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.956 Test: blockdev writev readv 8 blocks ...passed 00:15:54.956 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.956 Test: blockdev writev readv block ...passed 00:15:54.956 Test: blockdev writev readv size > 128k ...passed 00:15:54.956 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.956 Test: blockdev comparev and writev ...passed 00:15:54.956 Test: blockdev nvme passthru rw ...passed 00:15:54.956 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.956 Test: blockdev nvme admin passthru ...passed 00:15:54.956 Test: blockdev copy ...passed 00:15:54.956 Suite: bdevio tests on: nvme0n1 00:15:54.956 Test: blockdev write read block ...passed 00:15:54.956 Test: blockdev write zeroes read block ...passed 00:15:55.215 Test: blockdev write zeroes read no split ...passed 00:15:55.215 Test: blockdev write zeroes read split ...passed 00:15:55.215 Test: blockdev write zeroes read split partial ...passed 00:15:55.215 Test: blockdev reset ...passed 00:15:55.215 Test: blockdev write read 8 blocks ...passed 00:15:55.215 Test: blockdev write read size > 128k ...passed 00:15:55.215 Test: blockdev write read invalid size ...passed 00:15:55.215 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.215 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.215 Test: blockdev write read max offset ...passed 00:15:55.215 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.215 Test: blockdev writev readv 8 blocks ...passed 00:15:55.215 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.215 Test: blockdev writev readv block ...passed 00:15:55.215 Test: blockdev writev readv size > 128k ...passed 00:15:55.215 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.215 Test: blockdev comparev and writev ...passed 00:15:55.215 Test: blockdev nvme passthru rw ...passed 00:15:55.215 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.215 Test: blockdev nvme admin passthru ...passed 00:15:55.215 Test: blockdev copy ...passed 
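Each of the six xnvme-backed bdevs runs the same 23-test bdevio suite (plain and vectored read/write, zero-fill, boundary/offset checks, compare-and-write, and the NVMe passthru paths), which is where the 138 total tests in the summary below come from. The suites are driven over the running bdevio instance; a minimal sketch of kicking them off by hand, assuming the bdevio process started above is still up, mirrors the command in the log:

    # Ask a running bdevio instance to execute its full test matrix
    ./test/bdev/bdevio/tests.py perform_tests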
00:15:55.215 00:15:55.215 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.215 suites 6 6 n/a 0 0 00:15:55.215 tests 138 138 138 0 0 00:15:55.215 asserts 780 780 780 0 n/a 00:15:55.215 00:15:55.216 Elapsed time = 1.391 seconds 00:15:55.216 0 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72478 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72478 ']' 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72478 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72478 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72478' 00:15:55.216 killing process with pid 72478 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72478 00:15:55.216 10:15:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72478 00:15:56.149 10:15:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:56.149 00:15:56.149 real 0m2.333s 00:15:56.149 user 0m5.682s 00:15:56.149 sys 0m0.297s 00:15:56.149 10:15:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.149 10:15:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:56.149 ************************************ 00:15:56.149 END TEST bdev_bounds 00:15:56.149 ************************************ 00:15:56.149 10:15:02 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:56.149 10:15:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:56.149 10:15:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.149 10:15:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.150 ************************************ 00:15:56.150 START TEST bdev_nbd 00:15:56.150 ************************************ 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
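nbd_function_test exports each of the six bdevs as a kernel block device over the nbd protocol and then exercises it with ordinary dd/cmp I/O, as the steps below show. A minimal sketch of one start/probe/stop cycle, assuming the kernel nbd module is loaded and an SPDK app is listening on /var/tmp/spdk-nbd.sock (the probe output path is illustrative):

    # Map a bdev to /dev/nbd0, prove it is readable, then unmap it
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdprobe bs=4096 count=1 iflag=direct
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0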
00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:56.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72537 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72537 /var/tmp/spdk-nbd.sock 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72537 ']' 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:56.150 10:15:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:56.150 [2024-12-06 10:15:02.202221] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:15:56.150 [2024-12-06 10:15:02.202493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.409 [2024-12-06 10:15:02.364796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.409 [2024-12-06 10:15:02.460625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.975 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.975 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.976 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:57.234 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:57.234 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:57.234 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:57.234 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:57.234 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.234 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.234 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.235 
1+0 records in 00:15:57.235 1+0 records out 00:15:57.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066632 s, 6.1 MB/s 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.235 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.493 1+0 records in 00:15:57.493 1+0 records out 00:15:57.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102879 s, 4.0 MB/s 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.493 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:57.752 10:15:03 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.752 1+0 records in 00:15:57.752 1+0 records out 00:15:57.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795962 s, 5.1 MB/s 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.752 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.011 10:15:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.011 1+0 records in 00:15:58.011 1+0 records out 00:15:58.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802327 s, 5.1 MB/s 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.011 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.269 1+0 records in 00:15:58.269 1+0 records out 00:15:58.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130675 s, 3.1 MB/s 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.269 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:58.528 10:15:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.528 1+0 records in 00:15:58.528 1+0 records out 00:15:58.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870573 s, 4.7 MB/s 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd0", 00:15:58.528 "bdev_name": "nvme0n1" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd1", 00:15:58.528 "bdev_name": "nvme0n2" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd2", 00:15:58.528 "bdev_name": "nvme0n3" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd3", 00:15:58.528 "bdev_name": "nvme1n1" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd4", 00:15:58.528 "bdev_name": "nvme2n1" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd5", 00:15:58.528 "bdev_name": "nvme3n1" 00:15:58.528 } 00:15:58.528 ]' 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:58.528 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd0", 00:15:58.528 "bdev_name": "nvme0n1" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd1", 00:15:58.528 "bdev_name": "nvme0n2" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd2", 00:15:58.528 "bdev_name": "nvme0n3" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd3", 00:15:58.528 "bdev_name": "nvme1n1" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd4", 00:15:58.528 "bdev_name": "nvme2n1" 00:15:58.528 }, 00:15:58.528 { 00:15:58.528 "nbd_device": "/dev/nbd5", 00:15:58.528 "bdev_name": "nvme3n1" 00:15:58.528 } 00:15:58.528 ]' 00:15:58.528 10:15:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:58.786 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:58.786 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.787 10:15:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.045 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.302 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.559 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.827 10:15:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.107 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:00.364 /dev/nbd0 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.364 1+0 records in 00:16:00.364 1+0 records out 00:16:00.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473609 s, 8.6 MB/s 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.364 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:00.622 /dev/nbd1 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.622 1+0 records in 00:16:00.622 1+0 records out 00:16:00.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744146 s, 5.5 MB/s 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.622 10:15:06 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.622 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:00.881 /dev/nbd10 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.881 1+0 records in 00:16:00.881 1+0 records out 00:16:00.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294511 s, 13.9 MB/s 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.881 10:15:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:01.139 /dev/nbd11 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.139 10:15:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.139 1+0 records in 00:16:01.139 1+0 records out 00:16:01.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511097 s, 8.0 MB/s 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.139 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:01.397 /dev/nbd12 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.397 1+0 records in 00:16:01.397 1+0 records out 00:16:01.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063219 s, 6.5 MB/s 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.397 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:01.656 /dev/nbd13 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.656 1+0 records in 00:16:01.656 1+0 records out 00:16:01.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403374 s, 10.2 MB/s 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.656 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd0", 00:16:01.915 "bdev_name": "nvme0n1" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd1", 00:16:01.915 "bdev_name": "nvme0n2" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd10", 00:16:01.915 "bdev_name": "nvme0n3" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd11", 00:16:01.915 "bdev_name": "nvme1n1" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd12", 00:16:01.915 "bdev_name": "nvme2n1" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd13", 00:16:01.915 "bdev_name": "nvme3n1" 00:16:01.915 } 00:16:01.915 ]' 00:16:01.915 10:15:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd0", 00:16:01.915 "bdev_name": "nvme0n1" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd1", 00:16:01.915 "bdev_name": "nvme0n2" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd10", 00:16:01.915 "bdev_name": "nvme0n3" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd11", 00:16:01.915 "bdev_name": "nvme1n1" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd12", 00:16:01.915 "bdev_name": "nvme2n1" 00:16:01.915 }, 00:16:01.915 { 00:16:01.915 "nbd_device": "/dev/nbd13", 00:16:01.915 "bdev_name": "nvme3n1" 00:16:01.915 } 00:16:01.915 ]' 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:01.915 /dev/nbd1 00:16:01.915 /dev/nbd10 00:16:01.915 /dev/nbd11 00:16:01.915 /dev/nbd12 00:16:01.915 /dev/nbd13' 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:01.915 /dev/nbd1 00:16:01.915 /dev/nbd10 00:16:01.915 /dev/nbd11 00:16:01.915 /dev/nbd12 00:16:01.915 /dev/nbd13' 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:01.915 256+0 records in 00:16:01.915 256+0 records out 00:16:01.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00670466 s, 156 MB/s 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:01.915 256+0 records in 00:16:01.915 256+0 records out 00:16:01.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0697052 s, 15.0 MB/s 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.915 10:15:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:01.915 256+0 records in 00:16:01.915 256+0 records out 00:16:01.915 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0663411 s, 15.8 MB/s 00:16:01.915 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.915 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:02.174 256+0 records in 00:16:02.174 256+0 records out 00:16:02.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0667548 s, 15.7 MB/s 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:02.174 256+0 records in 00:16:02.174 256+0 records out 00:16:02.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0715942 s, 14.6 MB/s 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:02.174 256+0 records in 00:16:02.174 256+0 records out 00:16:02.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0771061 s, 13.6 MB/s 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:02.174 256+0 records in 00:16:02.174 256+0 records out 00:16:02.174 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0757337 s, 13.8 MB/s 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.174 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.432 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.690 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:02.948 10:15:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.948 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.205 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.462 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.719 10:15:09 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:03.719 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:03.976 10:15:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:03.976 malloc_lvol_verify 00:16:03.976 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:04.234 7b930951-5d91-4ebe-83c6-eef088d7e5ff 00:16:04.234 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:04.490 f574e1d6-d56c-492f-bd48-b92f67a00a5a 00:16:04.490 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:04.748 /dev/nbd0 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
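For reference, the nbd_common.sh helpers exercised above reduce to roughly the following shape. This is a sketch reconstructed from the xtrace line numbers (nbd_common.sh@70-85 for the data verify, @35-45 for the exit wait), not the verbatim source; the sleep interval inside the wait loop is an assumption, since the trace only shows the loop bounds and the grep/break steps.

# nbd_dd_data_verify: 'write' fills a 1 MiB temp file with random data and
# copies it onto every NBD device; 'verify' byte-compares the first 1 MiB of
# each device against that file and then removes it (nbd_common.sh@76-85).
nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}

# waitfornbd_exit: after nbd_stop_disk, poll /proc/partitions up to 20 times
# until the nbdX entry disappears (nbd_common.sh@35-45).
waitfornbd_exit() {
    local nbd_name=$1
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1 # interval assumed; not visible in the trace
        else
            break
        fi
    done
    return 0
}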
00:16:04.748 mke2fs 1.47.0 (5-Feb-2023) 00:16:04.748 Discarding device blocks: 0/4096 done 00:16:04.748 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:04.748 00:16:04.748 Allocating group tables: 0/1 done 00:16:04.748 Writing inode tables: 0/1 done 00:16:04.748 Creating journal (1024 blocks): done 00:16:04.748 Writing superblocks and filesystem accounting information: 0/1 done 00:16:04.748 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.748 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72537 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72537 ']' 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72537 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72537 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:05.005 killing process with pid 72537 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72537' 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72537 00:16:05.005 10:15:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72537 00:16:05.568 10:15:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:05.568 00:16:05.568 real 0m9.426s 00:16:05.568 user 0m13.570s 00:16:05.568 sys 0m3.019s 00:16:05.568 ************************************ 00:16:05.568 END TEST bdev_nbd 00:16:05.568 ************************************ 00:16:05.568 10:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.568 
10:15:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:05.568 10:15:11 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:05.568 10:15:11 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:05.568 10:15:11 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:05.568 10:15:11 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:05.568 10:15:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.568 10:15:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.568 10:15:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.568 ************************************ 00:16:05.568 START TEST bdev_fio 00:16:05.568 ************************************ 00:16:05.568 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:05.568 ************************************ 00:16:05.568 START TEST bdev_fio_rw_verify 00:16:05.568 ************************************ 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:05.568 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:05.569 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:05.569 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:05.569 10:15:11 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:05.825 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.825 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.825 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.825 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.825 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.825 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.825 fio-3.35 00:16:05.825 Starting 6 threads 00:16:18.062 00:16:18.062 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72937: Fri Dec 6 10:15:22 2024 00:16:18.062 read: IOPS=25.8k, BW=101MiB/s (106MB/s)(1008MiB/10001msec) 00:16:18.062 slat (usec): min=2, max=3361, avg= 5.21, stdev=12.35 00:16:18.062 clat (usec): min=74, max=6989, avg=695.67, 
stdev=615.78 00:16:18.062 lat (usec): min=77, max=7016, avg=700.89, stdev=616.53 00:16:18.062 clat percentiles (usec): 00:16:18.062 | 50.000th=[ 465], 99.000th=[ 2933], 99.900th=[ 4047], 99.990th=[ 5014], 00:16:18.062 | 99.999th=[ 6980] 00:16:18.062 write: IOPS=26.2k, BW=103MiB/s (108MB/s)(1026MiB/10001msec); 0 zone resets 00:16:18.062 slat (usec): min=10, max=3747, avg=29.99, stdev=95.24 00:16:18.062 clat (usec): min=50, max=8092, avg=889.91, stdev=699.57 00:16:18.062 lat (usec): min=68, max=8143, avg=919.89, stdev=712.89 00:16:18.062 clat percentiles (usec): 00:16:18.062 | 50.000th=[ 635], 99.000th=[ 3326], 99.900th=[ 4752], 99.990th=[ 5735], 00:16:18.062 | 99.999th=[ 6652] 00:16:18.062 bw ( KiB/s): min=53206, max=191814, per=100.00%, avg=106875.89, stdev=6874.81, samples=114 00:16:18.062 iops : min=13298, max=47953, avg=26717.74, stdev=1718.79, samples=114 00:16:18.062 lat (usec) : 100=0.10%, 250=12.39%, 500=31.91%, 750=20.03%, 1000=9.67% 00:16:18.062 lat (msec) : 2=19.31%, 4=6.36%, 10=0.22% 00:16:18.062 cpu : usr=43.82%, sys=32.52%, ctx=7673, majf=0, minf=22583 00:16:18.062 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.062 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.062 issued rwts: total=258156,262536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.062 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:18.062 00:16:18.062 Run status group 0 (all jobs): 00:16:18.062 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=1008MiB (1057MB), run=10001-10001msec 00:16:18.062 WRITE: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=1026MiB (1075MB), run=10001-10001msec 00:16:18.062 ----------------------------------------------------- 00:16:18.062 Suppressions used: 00:16:18.062 count bytes template 00:16:18.062 6 48 /usr/src/fio/parse.c 00:16:18.062 4208 403968 /usr/src/fio/iolog.c 00:16:18.062 1 8 libtcmalloc_minimal.so 00:16:18.062 1 904 libcrypto.so 00:16:18.062 ----------------------------------------------------- 00:16:18.062 00:16:18.062 00:16:18.062 real 0m11.763s 00:16:18.062 user 0m27.657s 00:16:18.062 sys 0m19.778s 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:18.062 ************************************ 00:16:18.062 END TEST bdev_fio_rw_verify 00:16:18.062 ************************************ 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "68b6a75b-2571-4b5e-9be6-35a6ddcb0592"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "68b6a75b-2571-4b5e-9be6-35a6ddcb0592",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "6aa42ffe-44e2-4ec5-9125-bd8a36ce994d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6aa42ffe-44e2-4ec5-9125-bd8a36ce994d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "ae679d5b-52ef-4174-854a-f2d01ab5c488"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ae679d5b-52ef-4174-854a-f2d01ab5c488",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "80893f48-4266-4341-a4c7-39de77db2385"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "80893f48-4266-4341-a4c7-39de77db2385",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "fa3251f0-5b39-41fd-9221-23e8ca291865"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fa3251f0-5b39-41fd-9221-23e8ca291865",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "69df9585-0070-4859-922d-d802ec373404"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "69df9585-0070-4859-922d-d802ec373404",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:18.062 /home/vagrant/spdk_repo/spdk 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
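For reference, the fio job file assembled above reduces to the following: fio_config_gen writes the verify-workload global section (its contents are not shown in this log) and appends serialize_overlap=1 once the fio-3.35 version check passes (autotest_common.sh@1327-1329), then blockdev.sh@340-342 appends one job section per xNVMe bdev. A sketch of that per-bdev assembly, with the config path shortened; it mirrors the traced echoes rather than the verbatim script:

# Append the overlap guard plus a [job_<bdev>]/filename pair for each bdev.
cfg=bdev.fio
echo 'serialize_overlap=1' >> "$cfg"
for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
    echo "[job_${b}]" >> "$cfg"
    echo "filename=${b}" >> "$cfg"
done

The resulting file is then passed to the fio_bdev run shown above (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 ... --spdk_json_conf=bdev.json), so each [job_*] section targets one registered bdev by name.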
00:16:18.062 00:16:18.062 real 0m11.905s 00:16:18.062 user 0m27.734s 00:16:18.062 sys 0m19.840s 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.062 10:15:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:18.062 ************************************ 00:16:18.062 END TEST bdev_fio 00:16:18.062 ************************************ 00:16:18.062 10:15:23 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:18.062 10:15:23 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:18.062 10:15:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:18.062 10:15:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.062 10:15:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:18.062 ************************************ 00:16:18.062 START TEST bdev_verify 00:16:18.062 ************************************ 00:16:18.062 10:15:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:18.062 [2024-12-06 10:15:23.611378] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:16:18.062 [2024-12-06 10:15:23.611501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73109 ] 00:16:18.062 [2024-12-06 10:15:23.774630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:18.062 [2024-12-06 10:15:23.868369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.062 [2024-12-06 10:15:23.868457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.321 Running I/O for 5 seconds... 
00:16:20.626 21632.00 IOPS, 84.50 MiB/s [2024-12-06T10:15:27.735Z] 22384.00 IOPS, 87.44 MiB/s [2024-12-06T10:15:28.669Z] 22954.67 IOPS, 89.67 MiB/s [2024-12-06T10:15:29.601Z] 22496.00 IOPS, 87.88 MiB/s [2024-12-06T10:15:29.601Z] 22566.40 IOPS, 88.15 MiB/s 00:16:23.434 Latency(us) 00:16:23.434 [2024-12-06T10:15:29.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.434 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x0 length 0x80000 00:16:23.434 nvme0n1 : 5.05 1749.30 6.83 0.00 0.00 73032.43 7561.85 73803.62 00:16:23.434 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x80000 length 0x80000 00:16:23.434 nvme0n1 : 5.07 1666.14 6.51 0.00 0.00 76665.11 9175.04 84692.68 00:16:23.434 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x0 length 0x80000 00:16:23.434 nvme0n2 : 5.05 1723.48 6.73 0.00 0.00 73980.29 4411.08 67350.84 00:16:23.434 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x80000 length 0x80000 00:16:23.434 nvme0n2 : 5.07 1665.67 6.51 0.00 0.00 76521.92 5671.38 69367.34 00:16:23.434 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x0 length 0x80000 00:16:23.434 nvme0n3 : 5.06 1721.00 6.72 0.00 0.00 73939.85 11947.72 67350.84 00:16:23.434 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x80000 length 0x80000 00:16:23.434 nvme0n3 : 5.07 1665.22 6.50 0.00 0.00 76366.21 6805.66 76223.41 00:16:23.434 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x0 length 0x20000 00:16:23.434 nvme1n1 : 5.05 1724.90 6.74 0.00 0.00 73628.69 7057.72 72190.42 00:16:23.434 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x20000 length 0x20000 00:16:23.434 nvme1n1 : 5.06 1668.76 6.52 0.00 0.00 76025.72 8620.50 76223.41 00:16:23.434 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x0 length 0xbd0bd 00:16:23.434 nvme2n1 : 5.07 2760.66 10.78 0.00 0.00 45862.56 2671.85 65737.65 00:16:23.434 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:23.434 nvme2n1 : 5.08 2639.32 10.31 0.00 0.00 47887.16 3881.75 72190.42 00:16:23.434 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0x0 length 0xa0000 00:16:23.434 nvme3n1 : 5.07 1767.41 6.90 0.00 0.00 71428.34 1651.00 67350.84 00:16:23.434 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:23.434 Verification LBA range: start 0xa0000 length 0xa0000 00:16:23.434 nvme3n1 : 5.08 1711.75 6.69 0.00 0.00 73755.44 3138.17 83886.08 00:16:23.434 [2024-12-06T10:15:29.601Z] =================================================================================================================== 00:16:23.434 [2024-12-06T10:15:29.601Z] Total : 22463.60 87.75 0.00 0.00 67845.65 1651.00 84692.68 00:16:23.999 00:16:23.999 real 0m6.540s 00:16:23.999 user 0m10.624s 00:16:23.999 sys 0m1.489s 00:16:23.999 10:15:30 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.999 ************************************ 00:16:23.999 END TEST bdev_verify 00:16:23.999 ************************************ 00:16:23.999 10:15:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:23.999 10:15:30 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:23.999 10:15:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:23.999 10:15:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.999 10:15:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.999 ************************************ 00:16:23.999 START TEST bdev_verify_big_io 00:16:23.999 ************************************ 00:16:23.999 10:15:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:24.256 [2024-12-06 10:15:30.217071] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:16:24.256 [2024-12-06 10:15:30.217189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73206 ] 00:16:24.256 [2024-12-06 10:15:30.376084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:24.514 [2024-12-06 10:15:30.472812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.514 [2024-12-06 10:15:30.472914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.771 Running I/O for 5 seconds... 
00:16:31.357 384.00 IOPS, 24.00 MiB/s [2024-12-06T10:15:37.524Z] 2704.00 IOPS, 169.00 MiB/s 00:16:31.357 Latency(us) 00:16:31.357 [2024-12-06T10:15:37.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:31.357 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:31.357 Verification LBA range: start 0x0 length 0x8000 00:16:31.357 nvme0n1 : 5.75 111.39 6.96 0.00 0.00 1101493.17 183097.50 1180857.90 00:16:31.357 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:31.357 Verification LBA range: start 0x8000 length 0x8000 00:16:31.357 nvme0n1 : 5.82 126.46 7.90 0.00 0.00 964384.30 33070.47 1361535.61 00:16:31.357 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:31.357 Verification LBA range: start 0x0 length 0x8000 00:16:31.357 nvme0n2 : 5.75 125.27 7.83 0.00 0.00 948865.22 87112.47 980821.86 00:16:31.357 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:31.357 Verification LBA range: start 0x8000 length 0x8000 00:16:31.357 nvme0n2 : 6.01 82.55 5.16 0.00 0.00 1418716.17 173418.34 2051982.57 00:16:31.357 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:31.357 Verification LBA range: start 0x0 length 0x8000 00:16:31.357 nvme0n3 : 6.01 138.43 8.65 0.00 0.00 825599.45 196809.65 777559.43 00:16:31.357 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:31.357 Verification LBA range: start 0x8000 length 0x8000 00:16:31.357 nvme0n3 : 5.82 107.17 6.70 0.00 0.00 1073760.83 75820.11 1348630.06 00:16:31.357 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:31.358 Verification LBA range: start 0x0 length 0x2000 00:16:31.358 nvme1n1 : 6.02 138.19 8.64 0.00 0.00 808071.97 57671.68 1129235.69 00:16:31.358 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:31.358 Verification LBA range: start 0x2000 length 0x2000 00:16:31.358 nvme1n1 : 6.02 124.90 7.81 0.00 0.00 882818.49 6856.07 1438968.91 00:16:31.358 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:31.358 Verification LBA range: start 0x0 length 0xbd0b 00:16:31.358 nvme2n1 : 6.03 169.85 10.62 0.00 0.00 639451.84 4259.84 890483.00 00:16:31.358 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:31.358 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:31.358 nvme2n1 : 6.03 148.62 9.29 0.00 0.00 729965.12 3188.58 1561571.64 00:16:31.358 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:31.358 Verification LBA range: start 0x0 length 0xa000 00:16:31.358 nvme3n1 : 6.03 114.16 7.13 0.00 0.00 917965.70 7108.14 2374621.34 00:16:31.358 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:31.358 Verification LBA range: start 0xa000 length 0xa000 00:16:31.358 nvme3n1 : 6.03 114.04 7.13 0.00 0.00 915805.13 5368.91 1948738.17 00:16:31.358 [2024-12-06T10:15:37.525Z] =================================================================================================================== 00:16:31.358 [2024-12-06T10:15:37.525Z] Total : 1501.03 93.81 0.00 0.00 903361.48 3188.58 2374621.34 00:16:31.930 00:16:31.930 real 0m7.642s 00:16:31.930 user 0m14.215s 00:16:31.930 sys 0m0.350s 00:16:31.930 10:15:37 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.930 ************************************ 00:16:31.930 10:15:37 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.930 END TEST bdev_verify_big_io 00:16:31.930 ************************************ 00:16:31.930 10:15:37 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.930 10:15:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:31.930 10:15:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.930 10:15:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:31.930 ************************************ 00:16:31.930 START TEST bdev_write_zeroes 00:16:31.930 ************************************ 00:16:31.930 10:15:37 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:31.930 [2024-12-06 10:15:37.901442] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:16:31.930 [2024-12-06 10:15:37.901560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73310 ] 00:16:31.930 [2024-12-06 10:15:38.050554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.190 [2024-12-06 10:15:38.125750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.449 Running I/O for 1 seconds... 00:16:33.390 75104.00 IOPS, 293.38 MiB/s 00:16:33.390 Latency(us) 00:16:33.390 [2024-12-06T10:15:39.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.390 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:33.390 nvme0n1 : 1.02 10835.82 42.33 0.00 0.00 11801.83 7914.73 22282.24 00:16:33.390 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:33.390 nvme0n2 : 1.02 10823.01 42.28 0.00 0.00 11807.16 7864.32 22584.71 00:16:33.390 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:33.390 nvme0n3 : 1.02 10810.40 42.23 0.00 0.00 11812.68 7763.50 22887.19 00:16:33.390 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:33.390 nvme1n1 : 1.02 10866.29 42.45 0.00 0.00 11743.70 6326.74 21979.77 00:16:33.390 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:33.390 nvme2n1 : 1.02 20500.60 80.08 0.00 0.00 6217.62 3201.18 15829.46 00:16:33.390 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:33.390 nvme3n1 : 1.02 10754.10 42.01 0.00 0.00 11796.15 5948.65 24197.91 00:16:33.390 [2024-12-06T10:15:39.557Z] =================================================================================================================== 00:16:33.390 [2024-12-06T10:15:39.557Z] Total : 74590.21 291.37 0.00 0.00 10257.37 3201.18 24197.91 00:16:34.372 00:16:34.372 real 0m2.340s 00:16:34.372 user 0m1.635s 00:16:34.372 sys 0m0.540s 00:16:34.372 10:15:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.372 10:15:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:34.372 
************************************ 00:16:34.372 END TEST bdev_write_zeroes 00:16:34.372 ************************************ 00:16:34.372 10:15:40 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.372 10:15:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:34.372 10:15:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.372 10:15:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.372 ************************************ 00:16:34.372 START TEST bdev_json_nonenclosed 00:16:34.372 ************************************ 00:16:34.372 10:15:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.372 [2024-12-06 10:15:40.281304] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:16:34.372 [2024-12-06 10:15:40.281416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73363 ] 00:16:34.372 [2024-12-06 10:15:40.442133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.632 [2024-12-06 10:15:40.537987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.632 [2024-12-06 10:15:40.538067] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:34.632 [2024-12-06 10:15:40.538085] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:34.632 [2024-12-06 10:15:40.538094] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:34.632 00:16:34.632 real 0m0.492s 00:16:34.632 user 0m0.298s 00:16:34.632 sys 0m0.089s 00:16:34.632 10:15:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.633 10:15:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:34.633 ************************************ 00:16:34.633 END TEST bdev_json_nonenclosed 00:16:34.633 ************************************ 00:16:34.633 10:15:40 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.633 10:15:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:34.633 10:15:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.633 10:15:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.633 ************************************ 00:16:34.633 START TEST bdev_json_nonarray 00:16:34.633 ************************************ 00:16:34.633 10:15:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.893 [2024-12-06 10:15:40.820276] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:16:34.893 [2024-12-06 10:15:40.820385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73383 ] 00:16:34.893 [2024-12-06 10:15:40.979587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.154 [2024-12-06 10:15:41.072673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.154 [2024-12-06 10:15:41.072756] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:35.154 [2024-12-06 10:15:41.072773] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:35.154 [2024-12-06 10:15:41.072781] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:35.154 00:16:35.154 real 0m0.489s 00:16:35.154 user 0m0.296s 00:16:35.154 sys 0m0.088s 00:16:35.154 10:15:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.154 10:15:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:35.154 ************************************ 00:16:35.154 END TEST bdev_json_nonarray 00:16:35.154 ************************************ 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:35.154 10:15:41 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:35.725 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:50.613 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.797 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.797 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.797 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:54.797 00:16:54.797 real 1m5.884s 00:16:54.797 user 1m20.965s 00:16:54.797 sys 1m8.729s 00:16:54.797 10:16:00 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.797 10:16:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.797 ************************************ 00:16:54.797 END TEST blockdev_xnvme 00:16:54.797 ************************************ 00:16:54.797 10:16:00 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:54.797 10:16:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.797 10:16:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.797 10:16:00 -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.797 ************************************ 00:16:54.797 START TEST ublk 00:16:54.797 ************************************ 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:54.797 * Looking for test storage... 00:16:54.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.797 10:16:00 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.797 10:16:00 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.797 10:16:00 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.797 10:16:00 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.797 10:16:00 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.797 10:16:00 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.797 10:16:00 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.797 10:16:00 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:54.797 10:16:00 ublk -- scripts/common.sh@345 -- # : 1 00:16:54.797 10:16:00 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.797 10:16:00 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.797 10:16:00 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:54.797 10:16:00 ublk -- scripts/common.sh@353 -- # local d=1 00:16:54.797 10:16:00 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.797 10:16:00 ublk -- scripts/common.sh@355 -- # echo 1 00:16:54.797 10:16:00 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.797 10:16:00 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@353 -- # local d=2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.797 10:16:00 ublk -- scripts/common.sh@355 -- # echo 2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.797 10:16:00 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.797 10:16:00 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.797 10:16:00 ublk -- scripts/common.sh@368 -- # return 0 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:54.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.797 --rc genhtml_branch_coverage=1 00:16:54.797 --rc genhtml_function_coverage=1 00:16:54.797 --rc genhtml_legend=1 00:16:54.797 --rc geninfo_all_blocks=1 00:16:54.797 --rc geninfo_unexecuted_blocks=1 00:16:54.797 00:16:54.797 ' 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:54.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.797 --rc genhtml_branch_coverage=1 00:16:54.797 --rc genhtml_function_coverage=1 00:16:54.797 --rc genhtml_legend=1 00:16:54.797 --rc geninfo_all_blocks=1 00:16:54.797 --rc geninfo_unexecuted_blocks=1 00:16:54.797 00:16:54.797 ' 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:54.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.797 --rc genhtml_branch_coverage=1 00:16:54.797 --rc genhtml_function_coverage=1 00:16:54.797 --rc genhtml_legend=1 00:16:54.797 --rc geninfo_all_blocks=1 00:16:54.797 --rc geninfo_unexecuted_blocks=1 00:16:54.797 00:16:54.797 ' 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:54.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.797 --rc genhtml_branch_coverage=1 00:16:54.797 --rc genhtml_function_coverage=1 00:16:54.797 --rc genhtml_legend=1 00:16:54.797 --rc geninfo_all_blocks=1 00:16:54.797 --rc geninfo_unexecuted_blocks=1 00:16:54.797 00:16:54.797 ' 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:54.797 10:16:00 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:54.797 10:16:00 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:54.797 10:16:00 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:54.797 10:16:00 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:54.797 10:16:00 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:54.797 10:16:00 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:54.797 10:16:00 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:54.797 10:16:00 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:54.797 10:16:00 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:54.797 10:16:00 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.797 10:16:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.797 ************************************ 00:16:54.797 START TEST test_save_ublk_config 00:16:54.797 ************************************ 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73694 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73694 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73694 ']' 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:54.797 10:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.798 10:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.798 10:16:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:54.798 [2024-12-06 10:16:00.646919] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
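test_save_ublk_config starts a dedicated spdk_tgt with ublk debug logging (-L ublk) and blocks in waitforlisten until the default RPC socket /var/tmp/spdk.sock answers, as the trace above shows. A condensed sketch of that launch-and-wait step, with SPDK_DIR standing in for the repo path and the poll loop a simplified substitute for the harness's waitforlisten:

    "$SPDK_DIR/build/bin/spdk_tgt" -L ublk &
    tgtpid=$!
    # Poll the RPC socket until the target answers; waitforlisten does this more carefully.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done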
00:16:54.798 [2024-12-06 10:16:00.647042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73694 ] 00:16:54.798 [2024-12-06 10:16:00.804237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.798 [2024-12-06 10:16:00.898683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.427 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.427 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:55.427 10:16:01 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:55.427 10:16:01 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:55.427 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.427 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:55.427 [2024-12-06 10:16:01.500467] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:55.427 [2024-12-06 10:16:01.501235] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:55.427 malloc0 00:16:55.427 [2024-12-06 10:16:01.556876] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:55.427 [2024-12-06 10:16:01.556947] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:55.427 [2024-12-06 10:16:01.556957] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:55.427 [2024-12-06 10:16:01.556964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:55.427 [2024-12-06 10:16:01.565531] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:55.427 [2024-12-06 10:16:01.565551] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:55.427 [2024-12-06 10:16:01.572474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:55.427 [2024-12-06 10:16:01.572562] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:55.427 [2024-12-06 10:16:01.589477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:55.685 0 00:16:55.685 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.685 10:16:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:55.685 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.685 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:55.945 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.945 10:16:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:55.945 "subsystems": [ 00:16:55.945 { 00:16:55.945 "subsystem": "fsdev", 00:16:55.945 "config": [ 00:16:55.945 { 00:16:55.945 "method": "fsdev_set_opts", 00:16:55.945 "params": { 00:16:55.945 "fsdev_io_pool_size": 65535, 00:16:55.945 "fsdev_io_cache_size": 256 00:16:55.945 } 00:16:55.945 } 00:16:55.945 ] 00:16:55.945 }, 00:16:55.945 { 00:16:55.945 "subsystem": "keyring", 00:16:55.945 "config": [] 00:16:55.945 }, 00:16:55.945 { 00:16:55.945 "subsystem": "iobuf", 00:16:55.945 "config": [ 00:16:55.945 { 
00:16:55.945 "method": "iobuf_set_options", 00:16:55.945 "params": { 00:16:55.945 "small_pool_count": 8192, 00:16:55.945 "large_pool_count": 1024, 00:16:55.945 "small_bufsize": 8192, 00:16:55.945 "large_bufsize": 135168, 00:16:55.945 "enable_numa": false 00:16:55.945 } 00:16:55.945 } 00:16:55.945 ] 00:16:55.945 }, 00:16:55.945 { 00:16:55.945 "subsystem": "sock", 00:16:55.945 "config": [ 00:16:55.945 { 00:16:55.945 "method": "sock_set_default_impl", 00:16:55.945 "params": { 00:16:55.945 "impl_name": "posix" 00:16:55.945 } 00:16:55.945 }, 00:16:55.945 { 00:16:55.945 "method": "sock_impl_set_options", 00:16:55.945 "params": { 00:16:55.945 "impl_name": "ssl", 00:16:55.945 "recv_buf_size": 4096, 00:16:55.945 "send_buf_size": 4096, 00:16:55.945 "enable_recv_pipe": true, 00:16:55.945 "enable_quickack": false, 00:16:55.946 "enable_placement_id": 0, 00:16:55.946 "enable_zerocopy_send_server": true, 00:16:55.946 "enable_zerocopy_send_client": false, 00:16:55.946 "zerocopy_threshold": 0, 00:16:55.946 "tls_version": 0, 00:16:55.946 "enable_ktls": false 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "sock_impl_set_options", 00:16:55.946 "params": { 00:16:55.946 "impl_name": "posix", 00:16:55.946 "recv_buf_size": 2097152, 00:16:55.946 "send_buf_size": 2097152, 00:16:55.946 "enable_recv_pipe": true, 00:16:55.946 "enable_quickack": false, 00:16:55.946 "enable_placement_id": 0, 00:16:55.946 "enable_zerocopy_send_server": true, 00:16:55.946 "enable_zerocopy_send_client": false, 00:16:55.946 "zerocopy_threshold": 0, 00:16:55.946 "tls_version": 0, 00:16:55.946 "enable_ktls": false 00:16:55.946 } 00:16:55.946 } 00:16:55.946 ] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "vmd", 00:16:55.946 "config": [] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "accel", 00:16:55.946 "config": [ 00:16:55.946 { 00:16:55.946 "method": "accel_set_options", 00:16:55.946 "params": { 00:16:55.946 "small_cache_size": 128, 00:16:55.946 "large_cache_size": 16, 00:16:55.946 "task_count": 2048, 00:16:55.946 "sequence_count": 2048, 00:16:55.946 "buf_count": 2048 00:16:55.946 } 00:16:55.946 } 00:16:55.946 ] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "bdev", 00:16:55.946 "config": [ 00:16:55.946 { 00:16:55.946 "method": "bdev_set_options", 00:16:55.946 "params": { 00:16:55.946 "bdev_io_pool_size": 65535, 00:16:55.946 "bdev_io_cache_size": 256, 00:16:55.946 "bdev_auto_examine": true, 00:16:55.946 "iobuf_small_cache_size": 128, 00:16:55.946 "iobuf_large_cache_size": 16 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "bdev_raid_set_options", 00:16:55.946 "params": { 00:16:55.946 "process_window_size_kb": 1024, 00:16:55.946 "process_max_bandwidth_mb_sec": 0 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "bdev_iscsi_set_options", 00:16:55.946 "params": { 00:16:55.946 "timeout_sec": 30 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "bdev_nvme_set_options", 00:16:55.946 "params": { 00:16:55.946 "action_on_timeout": "none", 00:16:55.946 "timeout_us": 0, 00:16:55.946 "timeout_admin_us": 0, 00:16:55.946 "keep_alive_timeout_ms": 10000, 00:16:55.946 "arbitration_burst": 0, 00:16:55.946 "low_priority_weight": 0, 00:16:55.946 "medium_priority_weight": 0, 00:16:55.946 "high_priority_weight": 0, 00:16:55.946 "nvme_adminq_poll_period_us": 10000, 00:16:55.946 "nvme_ioq_poll_period_us": 0, 00:16:55.946 "io_queue_requests": 0, 00:16:55.946 "delay_cmd_submit": true, 00:16:55.946 "transport_retry_count": 4, 00:16:55.946 
"bdev_retry_count": 3, 00:16:55.946 "transport_ack_timeout": 0, 00:16:55.946 "ctrlr_loss_timeout_sec": 0, 00:16:55.946 "reconnect_delay_sec": 0, 00:16:55.946 "fast_io_fail_timeout_sec": 0, 00:16:55.946 "disable_auto_failback": false, 00:16:55.946 "generate_uuids": false, 00:16:55.946 "transport_tos": 0, 00:16:55.946 "nvme_error_stat": false, 00:16:55.946 "rdma_srq_size": 0, 00:16:55.946 "io_path_stat": false, 00:16:55.946 "allow_accel_sequence": false, 00:16:55.946 "rdma_max_cq_size": 0, 00:16:55.946 "rdma_cm_event_timeout_ms": 0, 00:16:55.946 "dhchap_digests": [ 00:16:55.946 "sha256", 00:16:55.946 "sha384", 00:16:55.946 "sha512" 00:16:55.946 ], 00:16:55.946 "dhchap_dhgroups": [ 00:16:55.946 "null", 00:16:55.946 "ffdhe2048", 00:16:55.946 "ffdhe3072", 00:16:55.946 "ffdhe4096", 00:16:55.946 "ffdhe6144", 00:16:55.946 "ffdhe8192" 00:16:55.946 ] 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "bdev_nvme_set_hotplug", 00:16:55.946 "params": { 00:16:55.946 "period_us": 100000, 00:16:55.946 "enable": false 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "bdev_malloc_create", 00:16:55.946 "params": { 00:16:55.946 "name": "malloc0", 00:16:55.946 "num_blocks": 8192, 00:16:55.946 "block_size": 4096, 00:16:55.946 "physical_block_size": 4096, 00:16:55.946 "uuid": "b41e3dc5-5ede-40c2-8e22-d410a4208053", 00:16:55.946 "optimal_io_boundary": 0, 00:16:55.946 "md_size": 0, 00:16:55.946 "dif_type": 0, 00:16:55.946 "dif_is_head_of_md": false, 00:16:55.946 "dif_pi_format": 0 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "bdev_wait_for_examine" 00:16:55.946 } 00:16:55.946 ] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "scsi", 00:16:55.946 "config": null 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "scheduler", 00:16:55.946 "config": [ 00:16:55.946 { 00:16:55.946 "method": "framework_set_scheduler", 00:16:55.946 "params": { 00:16:55.946 "name": "static" 00:16:55.946 } 00:16:55.946 } 00:16:55.946 ] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "vhost_scsi", 00:16:55.946 "config": [] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "vhost_blk", 00:16:55.946 "config": [] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "ublk", 00:16:55.946 "config": [ 00:16:55.946 { 00:16:55.946 "method": "ublk_create_target", 00:16:55.946 "params": { 00:16:55.946 "cpumask": "1" 00:16:55.946 } 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "method": "ublk_start_disk", 00:16:55.946 "params": { 00:16:55.946 "bdev_name": "malloc0", 00:16:55.946 "ublk_id": 0, 00:16:55.946 "num_queues": 1, 00:16:55.946 "queue_depth": 128 00:16:55.946 } 00:16:55.946 } 00:16:55.946 ] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "nbd", 00:16:55.946 "config": [] 00:16:55.946 }, 00:16:55.946 { 00:16:55.946 "subsystem": "nvmf", 00:16:55.946 "config": [ 00:16:55.946 { 00:16:55.946 "method": "nvmf_set_config", 00:16:55.946 "params": { 00:16:55.946 "discovery_filter": "match_any", 00:16:55.946 "admin_cmd_passthru": { 00:16:55.946 "identify_ctrlr": false 00:16:55.946 }, 00:16:55.946 "dhchap_digests": [ 00:16:55.946 "sha256", 00:16:55.946 "sha384", 00:16:55.946 "sha512" 00:16:55.946 ], 00:16:55.946 "dhchap_dhgroups": [ 00:16:55.946 "null", 00:16:55.946 "ffdhe2048", 00:16:55.946 "ffdhe3072", 00:16:55.946 "ffdhe4096", 00:16:55.946 "ffdhe6144", 00:16:55.946 "ffdhe8192" 00:16:55.946 ] 00:16:55.947 } 00:16:55.947 }, 00:16:55.947 { 00:16:55.947 "method": "nvmf_set_max_subsystems", 00:16:55.947 "params": { 00:16:55.947 "max_subsystems": 1024 
00:16:55.947 } 00:16:55.947 }, 00:16:55.947 { 00:16:55.947 "method": "nvmf_set_crdt", 00:16:55.947 "params": { 00:16:55.947 "crdt1": 0, 00:16:55.947 "crdt2": 0, 00:16:55.947 "crdt3": 0 00:16:55.947 } 00:16:55.947 } 00:16:55.947 ] 00:16:55.947 }, 00:16:55.947 { 00:16:55.947 "subsystem": "iscsi", 00:16:55.947 "config": [ 00:16:55.947 { 00:16:55.947 "method": "iscsi_set_options", 00:16:55.947 "params": { 00:16:55.947 "node_base": "iqn.2016-06.io.spdk", 00:16:55.947 "max_sessions": 128, 00:16:55.947 "max_connections_per_session": 2, 00:16:55.947 "max_queue_depth": 64, 00:16:55.947 "default_time2wait": 2, 00:16:55.947 "default_time2retain": 20, 00:16:55.947 "first_burst_length": 8192, 00:16:55.947 "immediate_data": true, 00:16:55.947 "allow_duplicated_isid": false, 00:16:55.947 "error_recovery_level": 0, 00:16:55.947 "nop_timeout": 60, 00:16:55.947 "nop_in_interval": 30, 00:16:55.947 "disable_chap": false, 00:16:55.947 "require_chap": false, 00:16:55.947 "mutual_chap": false, 00:16:55.947 "chap_group": 0, 00:16:55.947 "max_large_datain_per_connection": 64, 00:16:55.947 "max_r2t_per_connection": 4, 00:16:55.947 "pdu_pool_size": 36864, 00:16:55.947 "immediate_data_pool_size": 16384, 00:16:55.947 "data_out_pool_size": 2048 00:16:55.947 } 00:16:55.947 } 00:16:55.947 ] 00:16:55.947 } 00:16:55.947 ] 00:16:55.947 }' 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73694 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73694 ']' 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73694 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73694 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.947 killing process with pid 73694 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73694' 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73694 00:16:55.947 10:16:01 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73694 00:16:57.330 [2024-12-06 10:16:03.243711] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:57.330 [2024-12-06 10:16:03.290534] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:57.330 [2024-12-06 10:16:03.290644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:57.330 [2024-12-06 10:16:03.297475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:57.330 [2024-12-06 10:16:03.297518] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:57.331 [2024-12-06 10:16:03.297525] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:57.331 [2024-12-06 10:16:03.297547] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:57.331 [2024-12-06 10:16:03.297657] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:58.719 10:16:04 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73752 00:16:58.719 10:16:04 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73752 00:16:58.719 10:16:04 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:58.719 10:16:04 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73752 ']' 00:16:58.719 10:16:04 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:58.719 "subsystems": [ 00:16:58.719 { 00:16:58.719 "subsystem": "fsdev", 00:16:58.719 "config": [ 00:16:58.719 { 00:16:58.719 "method": "fsdev_set_opts", 00:16:58.719 "params": { 00:16:58.719 "fsdev_io_pool_size": 65535, 00:16:58.719 "fsdev_io_cache_size": 256 00:16:58.719 } 00:16:58.719 } 00:16:58.719 ] 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "subsystem": "keyring", 00:16:58.719 "config": [] 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "subsystem": "iobuf", 00:16:58.719 "config": [ 00:16:58.719 { 00:16:58.719 "method": "iobuf_set_options", 00:16:58.719 "params": { 00:16:58.719 "small_pool_count": 8192, 00:16:58.719 "large_pool_count": 1024, 00:16:58.719 "small_bufsize": 8192, 00:16:58.719 "large_bufsize": 135168, 00:16:58.719 "enable_numa": false 00:16:58.719 } 00:16:58.719 } 00:16:58.719 ] 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "subsystem": "sock", 00:16:58.719 "config": [ 00:16:58.719 { 00:16:58.719 "method": "sock_set_default_impl", 00:16:58.719 "params": { 00:16:58.719 "impl_name": "posix" 00:16:58.719 } 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "method": "sock_impl_set_options", 00:16:58.719 "params": { 00:16:58.719 "impl_name": "ssl", 00:16:58.719 "recv_buf_size": 4096, 00:16:58.719 "send_buf_size": 4096, 00:16:58.719 "enable_recv_pipe": true, 00:16:58.719 "enable_quickack": false, 00:16:58.719 "enable_placement_id": 0, 00:16:58.719 "enable_zerocopy_send_server": true, 00:16:58.719 "enable_zerocopy_send_client": false, 00:16:58.719 "zerocopy_threshold": 0, 00:16:58.719 "tls_version": 0, 00:16:58.719 "enable_ktls": false 00:16:58.719 } 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "method": "sock_impl_set_options", 00:16:58.719 "params": { 00:16:58.719 "impl_name": "posix", 00:16:58.719 "recv_buf_size": 2097152, 00:16:58.719 "send_buf_size": 2097152, 00:16:58.719 "enable_recv_pipe": true, 00:16:58.719 "enable_quickack": false, 00:16:58.719 "enable_placement_id": 0, 00:16:58.719 "enable_zerocopy_send_server": true, 00:16:58.719 "enable_zerocopy_send_client": false, 00:16:58.719 "zerocopy_threshold": 0, 00:16:58.719 "tls_version": 0, 00:16:58.719 "enable_ktls": false 00:16:58.719 } 00:16:58.719 } 00:16:58.719 ] 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "subsystem": "vmd", 00:16:58.719 "config": [] 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "subsystem": "accel", 00:16:58.719 "config": [ 00:16:58.719 { 00:16:58.719 "method": "accel_set_options", 00:16:58.719 "params": { 00:16:58.719 "small_cache_size": 128, 00:16:58.719 "large_cache_size": 16, 00:16:58.719 "task_count": 2048, 00:16:58.719 "sequence_count": 2048, 00:16:58.719 "buf_count": 2048 00:16:58.719 } 00:16:58.719 } 00:16:58.719 ] 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "subsystem": "bdev", 00:16:58.719 "config": [ 00:16:58.719 { 00:16:58.719 "method": "bdev_set_options", 00:16:58.719 "params": { 00:16:58.719 "bdev_io_pool_size": 65535, 00:16:58.719 "bdev_io_cache_size": 256, 00:16:58.719 "bdev_auto_examine": true, 00:16:58.719 "iobuf_small_cache_size": 128, 00:16:58.719 "iobuf_large_cache_size": 16 00:16:58.719 } 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "method": "bdev_raid_set_options", 00:16:58.719 "params": { 00:16:58.719 
"process_window_size_kb": 1024, 00:16:58.719 "process_max_bandwidth_mb_sec": 0 00:16:58.719 } 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "method": "bdev_iscsi_set_options", 00:16:58.719 "params": { 00:16:58.719 "timeout_sec": 30 00:16:58.719 } 00:16:58.719 }, 00:16:58.719 { 00:16:58.719 "method": "bdev_nvme_set_options", 00:16:58.719 "params": { 00:16:58.719 "action_on_timeout": "none", 00:16:58.719 "timeout_us": 0, 00:16:58.719 "timeout_admin_us": 0, 00:16:58.719 "keep_alive_timeout_ms": 10000, 00:16:58.719 "arbitration_burst": 0, 00:16:58.719 "low_priority_weight": 0, 00:16:58.719 "medium_priority_weight": 0, 00:16:58.719 "high_priority_weight": 0, 00:16:58.719 "nvme_adminq_poll_period_us": 10000, 00:16:58.719 "nvme_ioq_poll_period_us": 0, 00:16:58.719 "io_queue_requests": 0, 00:16:58.719 "delay_cmd_submit": true, 00:16:58.719 "transport_retry_count": 4, 00:16:58.719 "bdev_retry_count": 3, 00:16:58.719 "transport_ack_timeout": 0, 00:16:58.719 "ctrlr_loss_timeout_sec": 0, 00:16:58.719 "reconnect_delay_sec": 0, 00:16:58.719 "fast_io_fail_timeout_sec": 0, 00:16:58.719 "disable_auto_failback": false, 00:16:58.720 "generate_uuids": false, 00:16:58.720 "transport_tos": 0, 00:16:58.720 "nvme_error_stat": false, 00:16:58.720 "rdma_srq_size": 0, 00:16:58.720 "io_path_stat": false, 00:16:58.720 "allow_accel_sequence": false, 00:16:58.720 "rdma_max_cq_size": 0, 00:16:58.720 "rdma_cm_event_timeout_ms": 0, 00:16:58.720 "dhchap_digests": [ 00:16:58.720 "sha256", 00:16:58.720 "sha384", 00:16:58.720 "sha512" 00:16:58.720 ], 00:16:58.720 "dhchap_dhgroups": [ 00:16:58.720 "null", 00:16:58.720 "ffdhe2048", 00:16:58.720 "ffdhe3072", 00:16:58.720 "ffdhe4096", 00:16:58.720 "ffdhe6144", 00:16:58.720 "ffdhe8192" 00:16:58.720 ] 00:16:58.720 } 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "method": "bdev_nvme_set_hotplug", 00:16:58.720 "params": { 00:16:58.720 "period_us": 100000, 00:16:58.720 "enable": false 00:16:58.720 } 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "method": "bdev_malloc_create", 00:16:58.720 "params": { 00:16:58.720 "name": "malloc0", 00:16:58.720 "num_blocks": 8192, 00:16:58.720 "block_size": 4096, 00:16:58.720 "physical_block_size": 4096, 00:16:58.720 "uuid": "b41e3dc5-5ede-40c2-8e22-d410a4208053", 00:16:58.720 "optimal_io_boundary": 0, 00:16:58.720 "md_size": 0, 00:16:58.720 "dif_type": 0, 00:16:58.720 "dif_is_head_of_md": false, 00:16:58.720 "dif_pi_format": 0 00:16:58.720 } 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "method": "bdev_wait_for_examine" 00:16:58.720 } 00:16:58.720 ] 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "scsi", 00:16:58.720 "config": null 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "scheduler", 00:16:58.720 "config": [ 00:16:58.720 { 00:16:58.720 "method": "framework_set_scheduler", 00:16:58.720 "params": { 00:16:58.720 "name": "static" 00:16:58.720 } 00:16:58.720 } 00:16:58.720 ] 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "vhost_scsi", 00:16:58.720 "config": [] 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "vhost_blk", 00:16:58.720 "config": [] 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "ublk", 00:16:58.720 "config": [ 00:16:58.720 { 00:16:58.720 "method": "ublk_create_target", 00:16:58.720 "params": { 00:16:58.720 "cpumask": "1" 00:16:58.720 } 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "method": "ublk_start_disk", 00:16:58.720 "params": { 00:16:58.720 "bdev_name": "malloc0", 00:16:58.720 "ublk_id": 0, 00:16:58.720 "num_queues": 1, 00:16:58.720 "queue_depth": 128 00:16:58.720 } 00:16:58.720 } 
00:16:58.720 ] 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "nbd", 00:16:58.720 "config": [] 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "nvmf", 00:16:58.720 "config": [ 00:16:58.720 { 00:16:58.720 "method": "nvmf_set_config", 00:16:58.720 "params": { 00:16:58.720 "discovery_filter": "match_any", 00:16:58.720 "admin_cmd_passthru": { 00:16:58.720 "identify_ctrlr": false 00:16:58.720 }, 00:16:58.720 "dhchap_digests": [ 00:16:58.720 "sha256", 00:16:58.720 "sha384", 00:16:58.720 "sha512" 00:16:58.720 ], 00:16:58.720 "dhchap_dhgroups": [ 00:16:58.720 "null", 00:16:58.720 "ffdhe2048", 00:16:58.720 "ffdhe3072", 00:16:58.720 "ffdhe4096", 00:16:58.720 "ffdhe6144", 00:16:58.720 "ffdhe8192" 00:16:58.720 ] 00:16:58.720 } 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "method": "nvmf_set_max_subsystems", 00:16:58.720 "params": { 00:16:58.720 "max_subsystems": 1024 00:16:58.720 } 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "method": "nvmf_set_crdt", 00:16:58.720 "params": { 00:16:58.720 "crdt1": 0, 00:16:58.720 "crdt2": 0, 00:16:58.720 "crdt3": 0 00:16:58.720 } 00:16:58.720 } 00:16:58.720 ] 00:16:58.720 }, 00:16:58.720 { 00:16:58.720 "subsystem": "iscsi", 00:16:58.720 "config": [ 00:16:58.720 { 00:16:58.720 "method": "iscsi_set_options", 00:16:58.720 "params": { 00:16:58.720 "node_base": "iqn.2016-06.io.spdk", 00:16:58.720 "max_sessions": 128, 00:16:58.720 "max_connections_per_session": 2, 00:16:58.720 "max_queue_depth": 64, 00:16:58.720 "default_time2wait": 2, 00:16:58.720 "default_time2retain": 20, 00:16:58.720 "first_burst_length": 8192, 00:16:58.720 "immediate_data": true, 00:16:58.720 "allow_duplicated_isid": false, 00:16:58.720 "error_recovery_level": 0, 00:16:58.720 "nop_timeout": 60, 00:16:58.720 "nop_in_interval": 30, 00:16:58.720 "disable_chap": false, 00:16:58.720 "require_chap": false, 00:16:58.720 "mutual_chap": false, 00:16:58.720 "chap_group": 0, 00:16:58.720 "max_large_datain_per_connection": 64, 00:16:58.720 "max_r2t_per_connection": 4, 00:16:58.720 "pdu_pool_size": 36864, 00:16:58.720 "immediate_data_pool_size": 16384, 00:16:58.720 "data_out_pool_size": 2048 00:16:58.720 } 00:16:58.720 } 00:16:58.720 ] 00:16:58.720 } 00:16:58.720 ] 00:16:58.720 }' 00:16:58.720 10:16:04 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.720 10:16:04 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.720 10:16:04 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.720 10:16:04 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.720 10:16:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:58.720 [2024-12-06 10:16:04.657241] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
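The round trip being exercised here: snapshot the first target's runtime configuration with save_config (the JSON dump above), kill it, then boot a second target from that JSON via process substitution, which is where the -c /dev/fd/63 in the command line comes from. Continuing the launch sketch from earlier:

    config=$("$SPDK_DIR/scripts/rpc.py" save_config)              # snapshot target no. 1
    kill "$tgtpid" && wait "$tgtpid"
    "$SPDK_DIR/build/bin/spdk_tgt" -L ublk -c <(echo "$config") & # replay into target no. 2

The checks that follow assert that /dev/ublkb0 reappears, i.e. that the ublk_create_target and ublk_start_disk calls embedded in the saved JSON were replayed faithfully.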
00:16:58.720 [2024-12-06 10:16:04.657779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73752 ] 00:16:58.720 [2024-12-06 10:16:04.815036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.980 [2024-12-06 10:16:04.941459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.923 [2024-12-06 10:16:05.796469] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:59.923 [2024-12-06 10:16:05.797376] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:59.923 [2024-12-06 10:16:05.804616] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:59.923 [2024-12-06 10:16:05.804710] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:59.923 [2024-12-06 10:16:05.804718] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:59.923 [2024-12-06 10:16:05.804726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.923 [2024-12-06 10:16:05.813570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.923 [2024-12-06 10:16:05.813598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.923 [2024-12-06 10:16:05.820480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.923 [2024-12-06 10:16:05.820599] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:59.923 [2024-12-06 10:16:05.837468] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73752 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73752 ']' 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73752 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73752 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.923 killing process with pid 73752 00:16:59.923 
10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73752' 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73752 00:16:59.923 10:16:05 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73752 00:17:01.306 [2024-12-06 10:16:07.055336] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:01.306 [2024-12-06 10:16:07.096484] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:01.306 [2024-12-06 10:16:07.096577] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:01.306 [2024-12-06 10:16:07.104470] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:01.306 [2024-12-06 10:16:07.104510] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:01.306 [2024-12-06 10:16:07.104516] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:01.306 [2024-12-06 10:16:07.104534] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:01.306 [2024-12-06 10:16:07.104642] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:02.305 10:16:08 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:17:02.305 00:17:02.305 real 0m7.720s 00:17:02.305 user 0m5.061s 00:17:02.305 sys 0m3.281s 00:17:02.305 10:16:08 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.305 10:16:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:17:02.305 ************************************ 00:17:02.305 END TEST test_save_ublk_config 00:17:02.305 ************************************ 00:17:02.305 10:16:08 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73824 00:17:02.305 10:16:08 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.305 10:16:08 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:02.305 10:16:08 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73824 00:17:02.305 10:16:08 ublk -- common/autotest_common.sh@835 -- # '[' -z 73824 ']' 00:17:02.305 10:16:08 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.305 10:16:08 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.305 10:16:08 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.305 10:16:08 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.305 10:16:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.305 [2024-12-06 10:16:08.385681] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
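test_create_ublk, which begins below, runs against a fresh target launched with a two-core mask (-m 0x3 above, hence the two reactor startup notices), then builds its device entirely over RPC: create the ublk target, back it with a 128 MiB malloc bdev, and expose it with 4 queues of depth 512. Roughly, as scripts/rpc.py equivalents of the rpc_cmd calls in the trace that follows:

    rpc.py ublk_create_target
    rpc.py bdev_malloc_create 128 4096            # 128 MiB bdev, 4096-byte blocks; prints the name (Malloc0 here)
    rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # ublk id 0 -> /dev/ublkb0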
00:17:02.305 [2024-12-06 10:16:08.385809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73824 ] 00:17:02.564 [2024-12-06 10:16:08.535082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:02.564 [2024-12-06 10:16:08.612781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.564 [2024-12-06 10:16:08.612867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.131 10:16:09 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.131 10:16:09 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:03.131 10:16:09 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:03.131 10:16:09 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:03.131 10:16:09 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.131 10:16:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.131 ************************************ 00:17:03.131 START TEST test_create_ublk 00:17:03.131 ************************************ 00:17:03.131 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:03.131 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:03.131 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.131 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.131 [2024-12-06 10:16:09.242463] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:03.131 [2024-12-06 10:16:09.243975] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:03.131 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.131 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:03.131 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:03.131 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.131 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.390 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:03.390 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.390 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.390 [2024-12-06 10:16:09.401564] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:03.390 [2024-12-06 10:16:09.401856] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:03.390 [2024-12-06 10:16:09.401871] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:03.390 [2024-12-06 10:16:09.401877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:03.390 [2024-12-06 10:16:09.410640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:03.390 [2024-12-06 10:16:09.410659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:03.390 
[2024-12-06 10:16:09.417465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:03.390 [2024-12-06 10:16:09.417940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:03.390 [2024-12-06 10:16:09.439475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:03.390 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:03.390 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.390 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:03.390 10:16:09 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:03.390 { 00:17:03.390 "ublk_device": "/dev/ublkb0", 00:17:03.390 "id": 0, 00:17:03.390 "queue_depth": 512, 00:17:03.390 "num_queues": 4, 00:17:03.390 "bdev_name": "Malloc0" 00:17:03.390 } 00:17:03.390 ]' 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:03.390 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:03.649 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:03.649 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:03.649 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:03.649 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:03.649 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:03.649 10:16:09 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
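run_fio_test, whose argument template was just assembled, reduces to one fio job against the freshly created /dev/ublkb0: 10 seconds of direct, sequential 4 KiB writes of pattern 0xcc over the 128 MiB device. Because the job is time_based, the write phase consumes the whole runtime, which is why fio immediately prints the "verification read phase will never start" notice below. The expanded command, restated for readability:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0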
00:17:03.649 10:16:09 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:03.649 fio: verification read phase will never start because write phase uses all of runtime 00:17:03.649 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:03.649 fio-3.35 00:17:03.649 Starting 1 process 00:17:15.858 00:17:15.858 fio_test: (groupid=0, jobs=1): err= 0: pid=73869: Fri Dec 6 10:16:19 2024 00:17:15.858 write: IOPS=16.4k, BW=64.2MiB/s (67.3MB/s)(642MiB/10001msec); 0 zone resets 00:17:15.858 clat (usec): min=35, max=8009, avg=60.09, stdev=200.76 00:17:15.858 lat (usec): min=35, max=8012, avg=60.52, stdev=200.78 00:17:15.858 clat percentiles (usec): 00:17:15.858 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 43], 00:17:15.858 | 30.00th=[ 45], 40.00th=[ 46], 50.00th=[ 48], 60.00th=[ 49], 00:17:15.858 | 70.00th=[ 50], 80.00th=[ 51], 90.00th=[ 55], 95.00th=[ 61], 00:17:15.858 | 99.00th=[ 73], 99.50th=[ 163], 99.90th=[ 3720], 99.95th=[ 3916], 00:17:15.858 | 99.99th=[ 4047] 00:17:15.858 bw ( KiB/s): min=32872, max=84056, per=99.01%, avg=65074.95, stdev=21757.37, samples=19 00:17:15.858 iops : min= 8218, max=21014, avg=16268.74, stdev=5439.34, samples=19 00:17:15.858 lat (usec) : 50=73.57%, 100=25.86%, 250=0.14%, 500=0.02%, 750=0.01% 00:17:15.858 lat (usec) : 1000=0.01% 00:17:15.858 lat (msec) : 2=0.04%, 4=0.31%, 10=0.02% 00:17:15.858 cpu : usr=3.01%, sys=12.13%, ctx=164333, majf=0, minf=795 00:17:15.858 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:15.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:15.858 issued rwts: total=0,164330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:15.858 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:15.858 00:17:15.858 Run status group 0 (all jobs): 00:17:15.858 WRITE: bw=64.2MiB/s (67.3MB/s), 64.2MiB/s-64.2MiB/s (67.3MB/s-67.3MB/s), io=642MiB (673MB), run=10001-10001msec 00:17:15.858 00:17:15.858 Disk stats (read/write): 00:17:15.858 ublkb0: ios=0/162288, merge=0/0, ticks=0/8484, in_queue=8484, util=99.11% 00:17:15.858 10:16:19 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.858 [2024-12-06 10:16:19.858520] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:15.858 [2024-12-06 10:16:19.887884] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:15.858 [2024-12-06 10:16:19.888771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:15.858 [2024-12-06 10:16:19.895469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:15.858 [2024-12-06 10:16:19.895691] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:15.858 [2024-12-06 10:16:19.895704] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.858 10:16:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.858 [2024-12-06 10:16:19.911522] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:15.858 request: 00:17:15.858 { 00:17:15.858 "ublk_id": 0, 00:17:15.858 "method": "ublk_stop_disk", 00:17:15.858 "req_id": 1 00:17:15.858 } 00:17:15.858 Got JSON-RPC error response 00:17:15.858 response: 00:17:15.858 { 00:17:15.858 "code": -19, 00:17:15.858 "message": "No such device" 00:17:15.858 } 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.858 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.859 10:16:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:15.859 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 [2024-12-06 10:16:19.927521] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:15.859 [2024-12-06 10:16:19.931073] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:15.859 [2024-12-06 10:16:19.931101] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:15.859 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:15.859 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:15.859 10:16:20 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:15.859 10:16:20 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:15.859 10:16:20 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:15.859 10:16:20 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:15.859 10:16:20 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:15.859 10:16:20 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:15.859 00:17:15.859 real 0m11.148s 00:17:15.859 user 0m0.597s 00:17:15.859 sys 0m1.293s 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.859 ************************************ 00:17:15.859 END TEST test_create_ublk 00:17:15.859 ************************************ 00:17:15.859 10:16:20 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:20 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:15.859 10:16:20 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.859 10:16:20 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.859 10:16:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 ************************************ 00:17:15.859 START TEST test_create_multi_ublk 00:17:15.859 ************************************ 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 [2024-12-06 10:16:20.435460] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:15.859 [2024-12-06 10:16:20.436989] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 [2024-12-06 10:16:20.663561] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:15.859 [2024-12-06 10:16:20.663850] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:15.859 [2024-12-06 10:16:20.663863] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:15.859 [2024-12-06 10:16:20.663870] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:15.859 [2024-12-06 10:16:20.687468] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:15.859 [2024-12-06 10:16:20.687488] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:15.859 [2024-12-06 10:16:20.711468] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:15.859 [2024-12-06 10:16:20.711950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:15.859 [2024-12-06 10:16:20.736473] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 [2024-12-06 10:16:20.942555] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:15.859 [2024-12-06 10:16:20.942841] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:15.859 [2024-12-06 10:16:20.942854] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:15.859 [2024-12-06 10:16:20.942859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:15.859 [2024-12-06 10:16:20.951622] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:15.859 [2024-12-06 10:16:20.951637] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:15.859 [2024-12-06 10:16:20.958472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:15.859 [2024-12-06 10:16:20.958967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:15.859 [2024-12-06 10:16:20.967491] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.859 
10:16:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 [2024-12-06 10:16:21.129567] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:15.859 [2024-12-06 10:16:21.129869] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:15.859 [2024-12-06 10:16:21.129880] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:15.859 [2024-12-06 10:16:21.129887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:15.859 [2024-12-06 10:16:21.137476] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:15.859 [2024-12-06 10:16:21.137497] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:15.859 [2024-12-06 10:16:21.145465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:15.859 [2024-12-06 10:16:21.145972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:15.859 [2024-12-06 10:16:21.162460] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.859 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.859 [2024-12-06 10:16:21.321576] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:15.859 [2024-12-06 10:16:21.321877] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:15.859 [2024-12-06 10:16:21.321890] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:15.859 [2024-12-06 10:16:21.321895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:15.859 
[2024-12-06 10:16:21.329483] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:15.859 [2024-12-06 10:16:21.329501] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:15.860 [2024-12-06 10:16:21.337478] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:15.860 [2024-12-06 10:16:21.337976] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:15.860 [2024-12-06 10:16:21.341328] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:15.860 { 00:17:15.860 "ublk_device": "/dev/ublkb0", 00:17:15.860 "id": 0, 00:17:15.860 "queue_depth": 512, 00:17:15.860 "num_queues": 4, 00:17:15.860 "bdev_name": "Malloc0" 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "ublk_device": "/dev/ublkb1", 00:17:15.860 "id": 1, 00:17:15.860 "queue_depth": 512, 00:17:15.860 "num_queues": 4, 00:17:15.860 "bdev_name": "Malloc1" 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "ublk_device": "/dev/ublkb2", 00:17:15.860 "id": 2, 00:17:15.860 "queue_depth": 512, 00:17:15.860 "num_queues": 4, 00:17:15.860 "bdev_name": "Malloc2" 00:17:15.860 }, 00:17:15.860 { 00:17:15.860 "ublk_device": "/dev/ublkb3", 00:17:15.860 "id": 3, 00:17:15.860 "queue_depth": 512, 00:17:15.860 "num_queues": 4, 00:17:15.860 "bdev_name": "Malloc3" 00:17:15.860 } 00:17:15.860 ]' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:15.860 10:16:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:15.860 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:15.860 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:15.860 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:15.860 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:15.860 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:15.860 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.860 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.860 [2024-12-06 10:16:22.021550] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:16.118 [2024-12-06 10:16:22.059898] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:16.118 [2024-12-06 10:16:22.060849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:16.118 [2024-12-06 10:16:22.069473] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:16.118 [2024-12-06 10:16:22.069707] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:16.118 [2024-12-06 10:16:22.069720] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.118 [2024-12-06 10:16:22.085548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:16.118 [2024-12-06 10:16:22.117500] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:16.118 [2024-12-06 10:16:22.118130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:16.118 [2024-12-06 10:16:22.125474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:16.118 [2024-12-06 10:16:22.125694] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:16.118 [2024-12-06 10:16:22.125707] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.118 [2024-12-06 10:16:22.141535] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:16.118 [2024-12-06 10:16:22.174881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:16.118 [2024-12-06 10:16:22.175798] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:16.118 [2024-12-06 10:16:22.181480] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:16.118 [2024-12-06 10:16:22.181697] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:16.118 [2024-12-06 10:16:22.181709] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.118 10:16:22 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.118 [2024-12-06 10:16:22.197527] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:16.118 [2024-12-06 10:16:22.235829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:16.118 [2024-12-06 10:16:22.236758] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:16.118 [2024-12-06 10:16:22.245469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:16.119 [2024-12-06 10:16:22.245677] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:16.119 [2024-12-06 10:16:22.245690] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:16.119 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.119 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:16.429 [2024-12-06 10:16:22.437520] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:16.429 [2024-12-06 10:16:22.441030] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:16.429 [2024-12-06 10:16:22.441059] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:16.429 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:16.429 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:16.430 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:16.430 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.430 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:16.688 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.688 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:16.688 10:16:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:16.688 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.688 10:16:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.255 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:17.514 10:16:23 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:17.514 00:17:17.514 real 0m3.214s 00:17:17.514 user 0m0.824s 00:17:17.514 sys 0m0.150s 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.514 ************************************ 00:17:17.514 END TEST test_create_multi_ublk 00:17:17.514 ************************************ 00:17:17.514 10:16:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:17.514 10:16:23 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:17.514 10:16:23 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:17.514 10:16:23 ublk -- ublk/ublk.sh@130 -- # killprocess 73824 00:17:17.514 10:16:23 ublk -- common/autotest_common.sh@954 -- # '[' -z 73824 ']' 00:17:17.514 10:16:23 ublk -- common/autotest_common.sh@958 -- # kill -0 73824 00:17:17.514 10:16:23 ublk -- common/autotest_common.sh@959 -- # uname 00:17:17.514 10:16:23 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.514 10:16:23 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73824 00:17:17.772 10:16:23 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.772 10:16:23 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.772 10:16:23 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73824' 00:17:17.772 killing process with pid 73824 00:17:17.772 10:16:23 ublk -- common/autotest_common.sh@973 -- # kill 73824 00:17:17.772 10:16:23 ublk -- common/autotest_common.sh@978 -- # wait 73824 00:17:18.337 [2024-12-06 10:16:24.214781] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:18.337 [2024-12-06 10:16:24.214829] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:18.904 ************************************ 00:17:18.904 END TEST ublk 00:17:18.904 ************************************ 00:17:18.904 00:17:18.904 real 0m24.457s 00:17:18.904 user 0m33.766s 00:17:18.904 sys 0m10.482s 00:17:18.904 10:16:24 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.904 10:16:24 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:18.904 10:16:24 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:18.904 
10:16:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:18.904 10:16:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.904 10:16:24 -- common/autotest_common.sh@10 -- # set +x 00:17:18.904 ************************************ 00:17:18.904 START TEST ublk_recovery 00:17:18.904 ************************************ 00:17:18.904 10:16:24 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:18.904 * Looking for test storage... 00:17:18.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:18.904 10:16:24 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:18.904 10:16:24 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:18.904 10:16:24 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:18.904 10:16:25 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.904 10:16:25 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:18.904 10:16:25 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.904 10:16:25 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:18.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.904 --rc genhtml_branch_coverage=1 00:17:18.904 --rc genhtml_function_coverage=1 00:17:18.904 --rc genhtml_legend=1 00:17:18.904 --rc geninfo_all_blocks=1 00:17:18.904 --rc geninfo_unexecuted_blocks=1 00:17:18.904 00:17:18.904 ' 00:17:18.904 10:16:25 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:18.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.904 --rc genhtml_branch_coverage=1 00:17:18.904 --rc genhtml_function_coverage=1 00:17:18.904 --rc genhtml_legend=1 00:17:18.904 --rc geninfo_all_blocks=1 00:17:18.904 --rc geninfo_unexecuted_blocks=1 00:17:18.904 00:17:18.904 ' 00:17:18.904 10:16:25 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:18.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.904 --rc genhtml_branch_coverage=1 00:17:18.904 --rc genhtml_function_coverage=1 00:17:18.904 --rc genhtml_legend=1 00:17:18.904 --rc geninfo_all_blocks=1 00:17:18.904 --rc geninfo_unexecuted_blocks=1 00:17:18.904 00:17:18.904 ' 00:17:18.904 10:16:25 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:18.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.904 --rc genhtml_branch_coverage=1 00:17:18.904 --rc genhtml_function_coverage=1 00:17:18.904 --rc genhtml_legend=1 00:17:18.904 --rc geninfo_all_blocks=1 00:17:18.904 --rc geninfo_unexecuted_blocks=1 00:17:18.904 00:17:18.904 ' 00:17:18.904 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:18.904 10:16:25 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:18.904 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:18.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.904 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74216 00:17:18.904 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:18.905 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74216 00:17:18.905 10:16:25 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74216 ']' 00:17:18.905 10:16:25 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.905 10:16:25 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.905 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:18.905 10:16:25 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.905 10:16:25 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.905 10:16:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.164 [2024-12-06 10:16:25.104366] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:17:19.164 [2024-12-06 10:16:25.104504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74216 ] 00:17:19.164 [2024-12-06 10:16:25.259038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:19.421 [2024-12-06 10:16:25.337150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.421 [2024-12-06 10:16:25.337220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:19.987 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.987 [2024-12-06 10:16:25.855463] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:19.987 [2024-12-06 10:16:25.856955] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.987 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.987 malloc0 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.987 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.987 [2024-12-06 10:16:25.935564] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:19.987 [2024-12-06 10:16:25.935642] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:19.987 [2024-12-06 10:16:25.935650] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:19.987 [2024-12-06 10:16:25.935658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:19.987 [2024-12-06 10:16:25.944544] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:19.987 [2024-12-06 10:16:25.944563] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:19.987 [2024-12-06 10:16:25.951465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:19.987 [2024-12-06 10:16:25.951575] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:19.987 [2024-12-06 10:16:25.968474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:19.987 1 00:17:19.987 10:16:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.987 10:16:25 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:20.926 10:16:26 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:20.926 10:16:26 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74251 00:17:20.926 10:16:26 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:20.926 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:20.926 fio-3.35 00:17:20.926 Starting 1 process 00:17:26.206 10:16:31 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74216 00:17:26.206 10:16:31 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:31.466 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74216 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:31.466 10:16:36 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74356 00:17:31.466 10:16:36 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:31.466 10:16:36 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:31.466 10:16:36 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74356 00:17:31.466 10:16:36 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74356 ']' 00:17:31.466 10:16:36 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.466 10:16:36 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.466 10:16:36 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.466 10:16:36 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.466 10:16:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:31.466 [2024-12-06 10:16:37.067230] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
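What this stretch of the log exercises: the first target (pid 74216) creates a ublk target, backs it with a 64 MB malloc bdev, and exposes /dev/ublkb1; fio then drives random I/O at it, the target is killed with SIGKILL mid-run, and the fresh target starting here (pid 74356) must re-adopt the same device so fio finishes without a single error. A minimal shell sketch of that flow, using only RPCs and flags that appear in this log (the $spdk_pid bookkeeping and backgrounding are illustrative glue, not the harness's literal code):

    # sketch of the crash/recover flow, not the harness's exact script
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"                        # simulate a target crash mid-I/O
    build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!   # bring up a fresh target
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1  # re-attach the surviving /dev/ublkb1

The UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY control commands visible further down are what this last RPC ends up driving against the kernel driver.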
00:17:31.466 [2024-12-06 10:16:37.067350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74356 ] 00:17:31.466 [2024-12-06 10:16:37.228804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:31.466 [2024-12-06 10:16:37.323169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.466 [2024-12-06 10:16:37.323180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.037 10:16:37 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.037 10:16:37 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:32.037 10:16:37 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:32.037 10:16:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.037 10:16:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.037 [2024-12-06 10:16:37.915471] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:32.037 [2024-12-06 10:16:37.917308] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:32.037 10:16:37 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.037 10:16:37 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:32.037 10:16:37 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.037 10:16:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.037 malloc0 00:17:32.037 10:16:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.037 10:16:38 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:32.037 10:16:38 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.037 10:16:38 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:32.037 [2024-12-06 10:16:38.019584] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:32.037 [2024-12-06 10:16:38.019617] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:32.037 [2024-12-06 10:16:38.019627] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:32.037 [2024-12-06 10:16:38.027506] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:32.037 [2024-12-06 10:16:38.027529] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:32.037 1 00:17:32.037 10:16:38 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.037 10:16:38 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74251 00:17:32.978 [2024-12-06 10:16:39.027562] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:32.978 [2024-12-06 10:16:39.035467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:32.978 [2024-12-06 10:16:39.035484] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:33.917 [2024-12-06 10:16:40.035512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:33.917 [2024-12-06 10:16:40.043467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:33.917 [2024-12-06 10:16:40.043504] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:35.300 [2024-12-06 10:16:41.043525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:35.300 [2024-12-06 10:16:41.047474] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:35.300 [2024-12-06 10:16:41.047498] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:35.300 [2024-12-06 10:16:41.047507] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:35.300 [2024-12-06 10:16:41.047576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:57.245 [2024-12-06 10:17:02.297468] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:57.245 [2024-12-06 10:17:02.303961] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:57.245 [2024-12-06 10:17:02.311627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:57.245 [2024-12-06 10:17:02.311645] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:23.906 00:18:23.906 fio_test: (groupid=0, jobs=1): err= 0: pid=74254: Fri Dec 6 10:17:27 2024 00:18:23.906 read: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(3432MiB/60002msec) 00:18:23.906 slat (nsec): min=929, max=941252, avg=4881.51, stdev=1904.85 00:18:23.906 clat (usec): min=1010, max=30338k, avg=3882.02, stdev=231112.50 00:18:23.906 lat (usec): min=1020, max=30338k, avg=3886.90, stdev=231112.50 00:18:23.906 clat percentiles (usec): 00:18:23.906 | 1.00th=[ 1680], 5.00th=[ 1778], 10.00th=[ 1811], 20.00th=[ 1844], 00:18:23.906 | 30.00th=[ 1860], 40.00th=[ 1876], 50.00th=[ 1893], 60.00th=[ 1926], 00:18:23.906 | 70.00th=[ 1991], 80.00th=[ 2343], 90.00th=[ 2442], 95.00th=[ 3130], 00:18:23.906 | 99.00th=[ 5276], 99.50th=[ 5669], 99.90th=[ 7504], 99.95th=[11207], 00:18:23.906 | 99.99th=[12911] 00:18:23.906 bw ( KiB/s): min=48024, max=130736, per=100.00%, avg=117062.24, stdev=17333.53, samples=59 00:18:23.906 iops : min=12006, max=32684, avg=29265.56, stdev=4333.38, samples=59 00:18:23.906 write: IOPS=14.6k, BW=57.1MiB/s (59.9MB/s)(3426MiB/60002msec); 0 zone resets 00:18:23.906 slat (nsec): min=889, max=581621, avg=4908.59, stdev=1789.69 00:18:23.906 clat (usec): min=1058, max=30338k, avg=4856.55, stdev=284200.36 00:18:23.906 lat (usec): min=1062, max=30338k, avg=4861.46, stdev=284200.36 00:18:23.906 clat percentiles (usec): 00:18:23.906 | 1.00th=[ 1713], 5.00th=[ 1860], 10.00th=[ 1893], 20.00th=[ 1926], 00:18:23.906 | 30.00th=[ 1942], 40.00th=[ 1958], 50.00th=[ 1991], 60.00th=[ 2008], 00:18:23.906 | 70.00th=[ 2073], 80.00th=[ 2442], 90.00th=[ 2507], 95.00th=[ 3032], 00:18:23.906 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7439], 99.95th=[11338], 00:18:23.906 | 99.99th=[13042] 00:18:23.906 bw ( KiB/s): min=47536, max=130688, per=100.00%, avg=116904.41, stdev=17527.81, samples=59 00:18:23.906 iops : min=11884, max=32672, avg=29226.10, stdev=4381.95, samples=59 00:18:23.906 lat (msec) : 2=63.45%, 4=33.44%, 10=3.06%, 20=0.05%, >=2000=0.01% 00:18:23.906 cpu : usr=3.25%, sys=14.55%, ctx=59238, majf=0, minf=13 00:18:23.906 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:23.906 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:23.906 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:23.906 issued rwts: total=878571,877177,0,0 
short=0,0,0,0 dropped=0,0,0,0
00:18:23.906 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:23.906
00:18:23.906 Run status group 0 (all jobs):
00:18:23.906 READ: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=3432MiB (3599MB), run=60002-60002msec
00:18:23.906 WRITE: bw=57.1MiB/s (59.9MB/s), 57.1MiB/s-57.1MiB/s (59.9MB/s-59.9MB/s), io=3426MiB (3593MB), run=60002-60002msec
00:18:23.906
00:18:23.906 Disk stats (read/write):
00:18:23.906 ublkb1: ios=875100/873694, merge=0/0, ticks=3361026/4140283, in_queue=7501309, util=99.89%
00:18:23.906 10:17:27 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:18:23.906 [2024-12-06 10:17:27.223088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:18:23.906 [2024-12-06 10:17:27.269487] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:18:23.906 [2024-12-06 10:17:27.269625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:18:23.906 [2024-12-06 10:17:27.277467] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:18:23.906 [2024-12-06 10:17:27.277552] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:18:23.906 [2024-12-06 10:17:27.277559] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.906 10:17:27 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:18:23.906 [2024-12-06 10:17:27.292540] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:18:23.906 [2024-12-06 10:17:27.296182] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:18:23.906 [2024-12-06 10:17:27.296213] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:23.906 10:17:27 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:18:23.906 10:17:27 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:18:23.906 10:17:27 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74356
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74356 ']'
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74356
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74356
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 74356
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74356'
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74356
00:18:23.906 10:17:27 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74356
00:18:23.906 [2024-12-06
10:17:28.424669] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:23.906 [2024-12-06 10:17:28.424715] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:23.906 00:18:23.906 real 1m4.414s 00:18:23.906 user 1m48.258s 00:18:23.906 sys 0m20.545s 00:18:23.906 10:17:29 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.906 10:17:29 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:23.906 ************************************ 00:18:23.906 END TEST ublk_recovery 00:18:23.906 ************************************ 00:18:23.906 10:17:29 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:23.906 10:17:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:23.906 10:17:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:23.906 10:17:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:23.906 10:17:29 -- common/autotest_common.sh@10 -- # set +x 00:18:23.907 10:17:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:23.907 10:17:29 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:23.907 10:17:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:23.907 10:17:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.907 10:17:29 -- common/autotest_common.sh@10 -- # set +x 00:18:23.907 ************************************ 00:18:23.907 START TEST ftl 00:18:23.907 ************************************ 00:18:23.907 10:17:29 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:23.907 * Looking for test storage... 
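The string of '[' 0 -eq 1 ']' checks just above is autotest.sh walking its list of optional suites: each suite is gated on a flag from the job's configuration, and only the ftl gate evaluated true here, so run_test launches test/ftl/ftl.sh next. A sketch of that dispatch pattern with the gate written out as a variable (the flag name SPDK_TEST_FTL is an assumption; the log only shows the already-expanded comparison):

    # hypothetical reconstruction of one suite gate in spdk/autotest.sh
    if [ "${SPDK_TEST_FTL:-0}" -eq 1 ]; then
        run_test ftl "$rootdir/test/ftl/ftl.sh"
    fi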
00:18:23.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
10:17:29 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 10:17:29 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 10:17:29 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 10:17:29 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 10:17:29 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 10:17:29 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 10:17:29 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 10:17:29 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 10:17:29 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 10:17:29 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 10:17:29 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 10:17:29 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 10:17:29 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 10:17:29 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 10:17:29 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 10:17:29 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 10:17:29 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 10:17:29 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 10:17:29 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 10:17:29 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 10:17:29 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 10:17:29 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 10:17:29 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 10:17:29 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 10:17:29 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 10:17:29 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 10:17:29 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 10:17:29 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 10:17:29 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 10:17:29 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 10:17:29 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 10:17:29 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:23.907 10:17:29 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:23.907 10:17:29 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:23.907 10:17:29 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:23.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.907 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:23.907 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:23.907 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:23.907 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:23.907 10:17:29 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75160 00:18:23.907 10:17:29 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75160 00:18:23.907 10:17:29 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:23.907 10:17:29 ftl -- common/autotest_common.sh@835 -- # '[' -z 75160 ']' 00:18:23.907 10:17:29 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.907 10:17:29 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.907 10:17:29 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.907 10:17:29 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.907 10:17:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:23.907 [2024-12-06 10:17:30.027019] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:18:23.907 [2024-12-06 10:17:30.027142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75160 ] 00:18:24.167 [2024-12-06 10:17:30.182487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.167 [2024-12-06 10:17:30.258654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.733 10:17:30 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.733 10:17:30 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:24.733 10:17:30 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:24.992 10:17:31 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:25.558 10:17:31 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:25.558 10:17:31 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:26.124 10:17:32 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:26.124 10:17:32 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:26.124 10:17:32 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:26.382 10:17:32 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:26.382 10:17:32 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:26.382 10:17:32 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:26.382 10:17:32 ftl -- ftl/ftl.sh@50 -- # break 00:18:26.382 10:17:32 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:26.382 10:17:32 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:26.382 10:17:32 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:26.382 10:17:32 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:26.641 10:17:32 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:26.641 10:17:32 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:26.641 10:17:32 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:26.641 10:17:32 ftl -- ftl/ftl.sh@63 -- # break 00:18:26.641 10:17:32 ftl -- ftl/ftl.sh@66 -- # killprocess 75160 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@954 -- # '[' -z 75160 ']' 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@958 -- # kill -0 75160 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@959 -- # uname 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75160 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:26.641 killing process with pid 75160 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75160' 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@973 -- # kill 75160 00:18:26.641 10:17:32 ftl -- common/autotest_common.sh@978 -- # wait 75160 00:18:27.574 10:17:33 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:27.574 10:17:33 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:27.574 10:17:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:27.574 10:17:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.574 10:17:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:27.574 ************************************ 00:18:27.574 START TEST ftl_fio_basic 00:18:27.574 ************************************ 00:18:27.574 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:27.832 * Looking for test storage... 
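[editor's note] The cache/base device selection traced above (ftl.sh lines 47 and 60) is just two jq filters over bdev_get_bdevs output. A standalone sketch of the same selection, with the cache address parameterized via --arg instead of the literal 0000:00:10.0 seen in the trace; rpc_py and the 1310720-block floor are copied from the log, the variable names are ours, and the filters assume only NVMe bdevs are attached at this point, as in the run above:

    # Sketch of the device selection ftl.sh performs above; names are illustrative.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Cache candidates: non-zoned bdevs with 64-byte metadata and >= 1310720 blocks.
    cache_disks=$("$rpc_py" bdev_get_bdevs | jq -r '.[]
        | select(.md_size == 64 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address')

    # Base candidates: any other non-zoned, large-enough NVMe device.
    nv_cache=${cache_disks%%$'\n'*}   # first hit, e.g. 0000:00:10.0 in this run
    base_disks=$("$rpc_py" bdev_get_bdevs | jq -r --arg nv "$nv_cache" '.[]
        | select(.driver_specific.nvme[0].pci_address != $nv
                 and .zoned == false and .num_blocks >= 1310720)
        .driver_specific.nvme[].pci_address')

In this run that yields 0000:00:10.0 as the write-buffer cache and 0000:00:11.0 as the base device, matching the trace.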
00:18:27.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.832 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:27.832 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:27.832 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:27.832 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:27.832 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:27.832 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:27.832 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.833 --rc genhtml_branch_coverage=1 00:18:27.833 --rc genhtml_function_coverage=1 00:18:27.833 --rc genhtml_legend=1 00:18:27.833 --rc geninfo_all_blocks=1 00:18:27.833 --rc geninfo_unexecuted_blocks=1 00:18:27.833 00:18:27.833 ' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.833 --rc 
genhtml_branch_coverage=1 00:18:27.833 --rc genhtml_function_coverage=1 00:18:27.833 --rc genhtml_legend=1 00:18:27.833 --rc geninfo_all_blocks=1 00:18:27.833 --rc geninfo_unexecuted_blocks=1 00:18:27.833 00:18:27.833 ' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.833 --rc genhtml_branch_coverage=1 00:18:27.833 --rc genhtml_function_coverage=1 00:18:27.833 --rc genhtml_legend=1 00:18:27.833 --rc geninfo_all_blocks=1 00:18:27.833 --rc geninfo_unexecuted_blocks=1 00:18:27.833 00:18:27.833 ' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:27.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:27.833 --rc genhtml_branch_coverage=1 00:18:27.833 --rc genhtml_function_coverage=1 00:18:27.833 --rc genhtml_legend=1 00:18:27.833 --rc geninfo_all_blocks=1 00:18:27.833 --rc geninfo_unexecuted_blocks=1 00:18:27.833 00:18:27.833 ' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:27.833 
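[editor's note] Both lcov probes above (one per test scope) funnel "lt 1.15 2" into scripts/common.sh's cmp_versions, which splits each version string on ".", "-", ":" and compares field by field. A condensed sketch of that logic, minus the non-numeric guards the trace shows around the decimal helper:

    # Condensed cmp_versions, as traced from scripts/common.sh (numeric fields only).
    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # A missing field counts as 0, so "2" compares like "2.0.0".
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $2 == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $2 == '<' ]]; return; fi
        done
        return 1   # versions are equal; neither '<' nor '>' holds
    }

    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x; use the 1.x option set"

Here 1 < 2 already decides the comparison in the first field, which is why the trace returns 0 and the 1.x-era LCOV_OPTS get exported.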
10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75288 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75288 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75288 ']' 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
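[editor's note] waitforlisten, invoked just above with svcpid=75288, blocks until the freshly launched spdk_tgt is both alive and answering RPCs on its UNIX socket. A minimal stand-in for what the trace shows it doing; the retry budget, socket path, and "Waiting for process..." message come from the log, while probing via rpc_get_methods is our assumption of a cheap liveness call, not necessarily the real helper's mechanism:

    # Minimal stand-in for waitforlisten as traced above (not the real helper).
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            # Bail out early if the target died instead of starting.
            kill -0 "$pid" 2>/dev/null || return 1
            # Probe the RPC socket with a cheap call; success means it is listening.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                   >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }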
00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.833 10:17:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:27.833 [2024-12-06 10:17:33.960773] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:18:27.833 [2024-12-06 10:17:33.960892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75288 ] 00:18:28.091 [2024-12-06 10:17:34.114696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:28.091 [2024-12-06 10:17:34.193666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.091 [2024-12-06 10:17:34.193791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.091 [2024-12-06 10:17:34.193794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:28.656 10:17:34 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:28.913 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:29.171 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:29.171 { 00:18:29.171 "name": "nvme0n1", 00:18:29.171 "aliases": [ 00:18:29.171 "d2710e8e-632a-4d8a-9a02-68bf7f4a58e3" 00:18:29.171 ], 00:18:29.171 "product_name": "NVMe disk", 00:18:29.171 "block_size": 4096, 00:18:29.171 "num_blocks": 1310720, 00:18:29.171 "uuid": "d2710e8e-632a-4d8a-9a02-68bf7f4a58e3", 00:18:29.171 "numa_id": -1, 00:18:29.171 "assigned_rate_limits": { 00:18:29.171 "rw_ios_per_sec": 0, 00:18:29.171 "rw_mbytes_per_sec": 0, 00:18:29.171 "r_mbytes_per_sec": 0, 00:18:29.171 "w_mbytes_per_sec": 0 00:18:29.171 }, 00:18:29.171 "claimed": false, 00:18:29.171 "zoned": false, 00:18:29.171 "supported_io_types": { 00:18:29.171 "read": true, 00:18:29.171 "write": true, 00:18:29.171 "unmap": true, 00:18:29.171 "flush": true, 00:18:29.171 "reset": true, 00:18:29.171 "nvme_admin": true, 00:18:29.171 "nvme_io": true, 00:18:29.171 "nvme_io_md": 
false, 00:18:29.171 "write_zeroes": true, 00:18:29.171 "zcopy": false, 00:18:29.171 "get_zone_info": false, 00:18:29.171 "zone_management": false, 00:18:29.171 "zone_append": false, 00:18:29.171 "compare": true, 00:18:29.171 "compare_and_write": false, 00:18:29.171 "abort": true, 00:18:29.171 "seek_hole": false, 00:18:29.171 "seek_data": false, 00:18:29.171 "copy": true, 00:18:29.171 "nvme_iov_md": false 00:18:29.171 }, 00:18:29.171 "driver_specific": { 00:18:29.171 "nvme": [ 00:18:29.171 { 00:18:29.171 "pci_address": "0000:00:11.0", 00:18:29.171 "trid": { 00:18:29.171 "trtype": "PCIe", 00:18:29.171 "traddr": "0000:00:11.0" 00:18:29.171 }, 00:18:29.171 "ctrlr_data": { 00:18:29.171 "cntlid": 0, 00:18:29.171 "vendor_id": "0x1b36", 00:18:29.171 "model_number": "QEMU NVMe Ctrl", 00:18:29.171 "serial_number": "12341", 00:18:29.171 "firmware_revision": "8.0.0", 00:18:29.171 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:29.171 "oacs": { 00:18:29.171 "security": 0, 00:18:29.171 "format": 1, 00:18:29.171 "firmware": 0, 00:18:29.171 "ns_manage": 1 00:18:29.171 }, 00:18:29.171 "multi_ctrlr": false, 00:18:29.171 "ana_reporting": false 00:18:29.171 }, 00:18:29.171 "vs": { 00:18:29.171 "nvme_version": "1.4" 00:18:29.171 }, 00:18:29.171 "ns_data": { 00:18:29.171 "id": 1, 00:18:29.171 "can_share": false 00:18:29.171 } 00:18:29.171 } 00:18:29.171 ], 00:18:29.171 "mp_policy": "active_passive" 00:18:29.171 } 00:18:29.171 } 00:18:29.172 ]' 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:29.172 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:29.428 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:29.428 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:29.685 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=b05d77c5-e65b-4708-8080-abf59b533ec2 00:18:29.685 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b05d77c5-e65b-4708-8080-abf59b533ec2 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:29.942 10:17:35 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:29.942 10:17:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:30.199 { 00:18:30.199 "name": "7048620e-abf5-48e0-9f6a-b14e5f78a210", 00:18:30.199 "aliases": [ 00:18:30.199 "lvs/nvme0n1p0" 00:18:30.199 ], 00:18:30.199 "product_name": "Logical Volume", 00:18:30.199 "block_size": 4096, 00:18:30.199 "num_blocks": 26476544, 00:18:30.199 "uuid": "7048620e-abf5-48e0-9f6a-b14e5f78a210", 00:18:30.199 "assigned_rate_limits": { 00:18:30.199 "rw_ios_per_sec": 0, 00:18:30.199 "rw_mbytes_per_sec": 0, 00:18:30.199 "r_mbytes_per_sec": 0, 00:18:30.199 "w_mbytes_per_sec": 0 00:18:30.199 }, 00:18:30.199 "claimed": false, 00:18:30.199 "zoned": false, 00:18:30.199 "supported_io_types": { 00:18:30.199 "read": true, 00:18:30.199 "write": true, 00:18:30.199 "unmap": true, 00:18:30.199 "flush": false, 00:18:30.199 "reset": true, 00:18:30.199 "nvme_admin": false, 00:18:30.199 "nvme_io": false, 00:18:30.199 "nvme_io_md": false, 00:18:30.199 "write_zeroes": true, 00:18:30.199 "zcopy": false, 00:18:30.199 "get_zone_info": false, 00:18:30.199 "zone_management": false, 00:18:30.199 "zone_append": false, 00:18:30.199 "compare": false, 00:18:30.199 "compare_and_write": false, 00:18:30.199 "abort": false, 00:18:30.199 "seek_hole": true, 00:18:30.199 "seek_data": true, 00:18:30.199 "copy": false, 00:18:30.199 "nvme_iov_md": false 00:18:30.199 }, 00:18:30.199 "driver_specific": { 00:18:30.199 "lvol": { 00:18:30.199 "lvol_store_uuid": "b05d77c5-e65b-4708-8080-abf59b533ec2", 00:18:30.199 "base_bdev": "nvme0n1", 00:18:30.199 "thin_provision": true, 00:18:30.199 "num_allocated_clusters": 0, 00:18:30.199 "snapshot": false, 00:18:30.199 "clone": false, 00:18:30.199 "esnap_clone": false 00:18:30.199 } 00:18:30.199 } 00:18:30.199 } 00:18:30.199 ]' 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:30.199 10:17:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:30.457 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:30.713 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:30.713 { 00:18:30.713 "name": "7048620e-abf5-48e0-9f6a-b14e5f78a210", 00:18:30.713 "aliases": [ 00:18:30.713 "lvs/nvme0n1p0" 00:18:30.713 ], 00:18:30.713 "product_name": "Logical Volume", 00:18:30.713 "block_size": 4096, 00:18:30.714 "num_blocks": 26476544, 00:18:30.714 "uuid": "7048620e-abf5-48e0-9f6a-b14e5f78a210", 00:18:30.714 "assigned_rate_limits": { 00:18:30.714 "rw_ios_per_sec": 0, 00:18:30.714 "rw_mbytes_per_sec": 0, 00:18:30.714 "r_mbytes_per_sec": 0, 00:18:30.714 "w_mbytes_per_sec": 0 00:18:30.714 }, 00:18:30.714 "claimed": false, 00:18:30.714 "zoned": false, 00:18:30.714 "supported_io_types": { 00:18:30.714 "read": true, 00:18:30.714 "write": true, 00:18:30.714 "unmap": true, 00:18:30.714 "flush": false, 00:18:30.714 "reset": true, 00:18:30.714 "nvme_admin": false, 00:18:30.714 "nvme_io": false, 00:18:30.714 "nvme_io_md": false, 00:18:30.714 "write_zeroes": true, 00:18:30.714 "zcopy": false, 00:18:30.714 "get_zone_info": false, 00:18:30.714 "zone_management": false, 00:18:30.714 "zone_append": false, 00:18:30.714 "compare": false, 00:18:30.714 "compare_and_write": false, 00:18:30.714 "abort": false, 00:18:30.714 "seek_hole": true, 00:18:30.714 "seek_data": true, 00:18:30.714 "copy": false, 00:18:30.714 "nvme_iov_md": false 00:18:30.714 }, 00:18:30.714 "driver_specific": { 00:18:30.714 "lvol": { 00:18:30.714 "lvol_store_uuid": "b05d77c5-e65b-4708-8080-abf59b533ec2", 00:18:30.714 "base_bdev": "nvme0n1", 00:18:30.714 "thin_provision": true, 00:18:30.714 "num_allocated_clusters": 0, 00:18:30.714 "snapshot": false, 00:18:30.714 "clone": false, 00:18:30.714 "esnap_clone": false 00:18:30.714 } 00:18:30.714 } 00:18:30.714 } 00:18:30.714 ]' 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:30.714 10:17:36 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:30.969 
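[editor's note] The '[' -eq 1 ']' just traced is the cause of the "fio.sh: line 52: [: -eq: unary operator expected" message recorded next: the word that should precede -eq expanded to the empty string, so [ parses -eq as its lone operand. The run survives because the test simply evaluates false and falls through, but the guard is fragile. The usual hardening looks like this; l2p_flag is an illustrative name standing in for whatever fio.sh line 52 actually tests:

    # Fragile form, as traced: with $l2p_flag empty this expands to '[ -eq 1 ]'.
    if [ $l2p_flag -eq 1 ]; then
        :
    fi

    # Hardened variants: default the expansion, or use [[ ]], which does not word-split.
    if [ "${l2p_flag:-0}" -eq 1 ]; then
        :
    fi
    if [[ ${l2p_flag:-0} -eq 1 ]]; then
        :
    fi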
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:30.969 10:17:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7048620e-abf5-48e0-9f6a-b14e5f78a210 00:18:30.969 10:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:30.969 { 00:18:30.969 "name": "7048620e-abf5-48e0-9f6a-b14e5f78a210", 00:18:30.969 "aliases": [ 00:18:30.969 "lvs/nvme0n1p0" 00:18:30.969 ], 00:18:30.969 "product_name": "Logical Volume", 00:18:30.969 "block_size": 4096, 00:18:30.969 "num_blocks": 26476544, 00:18:30.969 "uuid": "7048620e-abf5-48e0-9f6a-b14e5f78a210", 00:18:30.969 "assigned_rate_limits": { 00:18:30.969 "rw_ios_per_sec": 0, 00:18:30.969 "rw_mbytes_per_sec": 0, 00:18:30.969 "r_mbytes_per_sec": 0, 00:18:30.969 "w_mbytes_per_sec": 0 00:18:30.969 }, 00:18:30.969 "claimed": false, 00:18:30.969 "zoned": false, 00:18:30.969 "supported_io_types": { 00:18:30.969 "read": true, 00:18:30.969 "write": true, 00:18:30.969 "unmap": true, 00:18:30.969 "flush": false, 00:18:30.969 "reset": true, 00:18:30.969 "nvme_admin": false, 00:18:30.969 "nvme_io": false, 00:18:30.969 "nvme_io_md": false, 00:18:30.969 "write_zeroes": true, 00:18:30.969 "zcopy": false, 00:18:30.969 "get_zone_info": false, 00:18:30.969 "zone_management": false, 00:18:30.969 "zone_append": false, 00:18:30.969 "compare": false, 00:18:30.969 "compare_and_write": false, 00:18:30.969 "abort": false, 00:18:30.969 "seek_hole": true, 00:18:30.969 "seek_data": true, 00:18:30.969 "copy": false, 00:18:30.969 "nvme_iov_md": false 00:18:30.969 }, 00:18:30.969 "driver_specific": { 00:18:30.969 "lvol": { 00:18:30.969 "lvol_store_uuid": "b05d77c5-e65b-4708-8080-abf59b533ec2", 00:18:30.969 "base_bdev": "nvme0n1", 00:18:30.969 "thin_provision": true, 00:18:30.969 "num_allocated_clusters": 0, 00:18:30.969 "snapshot": false, 00:18:30.969 "clone": false, 00:18:30.969 "esnap_clone": false 00:18:30.969 } 00:18:30.969 } 00:18:30.969 } 00:18:30.969 ]' 00:18:30.969 10:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:31.228 10:17:37 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7048620e-abf5-48e0-9f6a-b14e5f78a210 -c nvc0n1p0 --l2p_dram_limit 60 00:18:31.228 [2024-12-06 10:17:37.354628] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.228 [2024-12-06 10:17:37.354670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:31.228 [2024-12-06 10:17:37.354683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:31.228 [2024-12-06 10:17:37.354690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.228 [2024-12-06 10:17:37.354738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.228 [2024-12-06 10:17:37.354746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.228 [2024-12-06 10:17:37.354755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:31.228 [2024-12-06 10:17:37.354761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.228 [2024-12-06 10:17:37.354793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:31.228 [2024-12-06 10:17:37.355358] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:31.228 [2024-12-06 10:17:37.355378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.355384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.229 [2024-12-06 10:17:37.355391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:18:31.229 [2024-12-06 10:17:37.355397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.355429] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0a8eed8d-c2c6-4a4b-8d58-a06178154f26 00:18:31.229 [2024-12-06 10:17:37.356387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.356415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:31.229 [2024-12-06 10:17:37.356424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:31.229 [2024-12-06 10:17:37.356431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.361077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.361105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.229 [2024-12-06 10:17:37.361113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.567 ms 00:18:31.229 [2024-12-06 10:17:37.361123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.361203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.361212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.229 [2024-12-06 10:17:37.361218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:31.229 [2024-12-06 10:17:37.361228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.361260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.361268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:31.229 [2024-12-06 10:17:37.361274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:31.229 [2024-12-06 10:17:37.361281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:31.229 [2024-12-06 10:17:37.361302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:31.229 [2024-12-06 10:17:37.364113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.364139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.229 [2024-12-06 10:17:37.364150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.813 ms 00:18:31.229 [2024-12-06 10:17:37.364156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.364191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.364198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:31.229 [2024-12-06 10:17:37.364205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:31.229 [2024-12-06 10:17:37.364211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.364233] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:31.229 [2024-12-06 10:17:37.364353] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:31.229 [2024-12-06 10:17:37.364365] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:31.229 [2024-12-06 10:17:37.364374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:31.229 [2024-12-06 10:17:37.364383] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364390] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364398] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:31.229 [2024-12-06 10:17:37.364404] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:31.229 [2024-12-06 10:17:37.364410] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:31.229 [2024-12-06 10:17:37.364415] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:31.229 [2024-12-06 10:17:37.364424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.364432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:31.229 [2024-12-06 10:17:37.364439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:18:31.229 [2024-12-06 10:17:37.364453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.364527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.364533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:31.229 [2024-12-06 10:17:37.364540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:31.229 [2024-12-06 10:17:37.364548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.364631] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:31.229 [2024-12-06 10:17:37.364641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:31.229 
[2024-12-06 10:17:37.364648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:31.229 [2024-12-06 10:17:37.364666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:31.229 [2024-12-06 10:17:37.364685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.229 [2024-12-06 10:17:37.364697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:31.229 [2024-12-06 10:17:37.364702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:31.229 [2024-12-06 10:17:37.364708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:31.229 [2024-12-06 10:17:37.364713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:31.229 [2024-12-06 10:17:37.364719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:31.229 [2024-12-06 10:17:37.364724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:31.229 [2024-12-06 10:17:37.364737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:31.229 [2024-12-06 10:17:37.364755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:31.229 [2024-12-06 10:17:37.364772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:31.229 [2024-12-06 10:17:37.364790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:31.229 [2024-12-06 10:17:37.364806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:31.229 [2024-12-06 10:17:37.364825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:31.229 [2024-12-06 10:17:37.364847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:31.229 [2024-12-06 10:17:37.364852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:31.229 [2024-12-06 10:17:37.364858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:31.229 [2024-12-06 10:17:37.364863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:31.229 [2024-12-06 10:17:37.364869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:31.229 [2024-12-06 10:17:37.364874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:31.229 [2024-12-06 10:17:37.364885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:31.229 [2024-12-06 10:17:37.364891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364895] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:31.229 [2024-12-06 10:17:37.364902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:31.229 [2024-12-06 10:17:37.364907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:31.229 [2024-12-06 10:17:37.364919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:31.229 [2024-12-06 10:17:37.364927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:31.229 [2024-12-06 10:17:37.364932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:31.229 [2024-12-06 10:17:37.364939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:31.229 [2024-12-06 10:17:37.364944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:31.229 [2024-12-06 10:17:37.364951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:31.229 [2024-12-06 10:17:37.364957] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:31.229 [2024-12-06 10:17:37.364966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.229 [2024-12-06 10:17:37.364972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:31.229 [2024-12-06 10:17:37.364979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:31.229 [2024-12-06 10:17:37.364986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:31.229 [2024-12-06 10:17:37.364992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:31.229 [2024-12-06 10:17:37.364998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:31.229 [2024-12-06 10:17:37.365005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:31.229 [2024-12-06 
10:17:37.365011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:31.229 [2024-12-06 10:17:37.365017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:31.229 [2024-12-06 10:17:37.365023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:31.229 [2024-12-06 10:17:37.365030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:31.229 [2024-12-06 10:17:37.365036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:31.229 [2024-12-06 10:17:37.365043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:31.229 [2024-12-06 10:17:37.365048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:31.229 [2024-12-06 10:17:37.365055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:31.229 [2024-12-06 10:17:37.365060] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:31.229 [2024-12-06 10:17:37.365069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:31.229 [2024-12-06 10:17:37.365076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:31.229 [2024-12-06 10:17:37.365082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:31.229 [2024-12-06 10:17:37.365088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:31.229 [2024-12-06 10:17:37.365094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:31.229 [2024-12-06 10:17:37.365101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.229 [2024-12-06 10:17:37.365107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:31.229 [2024-12-06 10:17:37.365113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:18:31.229 [2024-12-06 10:17:37.365120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.229 [2024-12-06 10:17:37.365162] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
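[editor's note] Every trace_step record above carries a name/duration/status triple, which makes it easy to rank where FTL startup time goes; the NV-cache scrub announced on the previous line turns out to dominate, as the next records show. A throwaway sketch, assuming the one-record-per-line form these messages have in the raw console output and a stand-in file name:

    # Rank FTL startup steps by duration; ftl_startup.log stands in for the captured log.
    awk '/\[FTL\]\[ftl0\] name:/     { sub(/.*name: /, ""); name = $0 }
         /\[FTL\]\[ftl0\] duration:/ { sub(/.*duration: /, ""); sub(/ *ms.*/, "")
                                       printf "%12s ms  %s\n", $0, name }' \
        ftl_startup.log | sort -rn | head

On this run the top entries would be the NV-cache scrub (~3009.882 ms) followed by the P2L wipe and trim-map clears, consistent with the ~3461.694 ms "FTL startup" total reported further down.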
00:18:31.229 [2024-12-06 10:17:37.365172] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:34.513 [2024-12-06 10:17:40.375056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.375121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:34.513 [2024-12-06 10:17:40.375135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3009.882 ms 00:18:34.513 [2024-12-06 10:17:40.375144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.400084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.400130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:34.513 [2024-12-06 10:17:40.400141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.734 ms 00:18:34.513 [2024-12-06 10:17:40.400151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.400274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.400286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:34.513 [2024-12-06 10:17:40.400294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:34.513 [2024-12-06 10:17:40.400305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.442232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.442281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:34.513 [2024-12-06 10:17:40.442294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.888 ms 00:18:34.513 [2024-12-06 10:17:40.442303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.442348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.442359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:34.513 [2024-12-06 10:17:40.442367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:34.513 [2024-12-06 10:17:40.442376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.442747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.442768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:34.513 [2024-12-06 10:17:40.442779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:34.513 [2024-12-06 10:17:40.442788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.442918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.442929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:34.513 [2024-12-06 10:17:40.442937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:34.513 [2024-12-06 10:17:40.442947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.457012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.457140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:34.513 [2024-12-06 
10:17:40.457156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.041 ms 00:18:34.513 [2024-12-06 10:17:40.457165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.468505] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:34.513 [2024-12-06 10:17:40.482239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.482271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:34.513 [2024-12-06 10:17:40.482286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.979 ms 00:18:34.513 [2024-12-06 10:17:40.482293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.539597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.539740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:34.513 [2024-12-06 10:17:40.539763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.271 ms 00:18:34.513 [2024-12-06 10:17:40.539771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.539962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.539973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:34.513 [2024-12-06 10:17:40.539986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:18:34.513 [2024-12-06 10:17:40.539993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.563082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.563197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:34.513 [2024-12-06 10:17:40.563216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.016 ms 00:18:34.513 [2024-12-06 10:17:40.563224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.585516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.585630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:34.513 [2024-12-06 10:17:40.585649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.251 ms 00:18:34.513 [2024-12-06 10:17:40.585656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.586210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.586220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:34.513 [2024-12-06 10:17:40.586230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:18:34.513 [2024-12-06 10:17:40.586238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.513 [2024-12-06 10:17:40.663479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.513 [2024-12-06 10:17:40.663522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:34.513 [2024-12-06 10:17:40.663541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.202 ms 00:18:34.513 [2024-12-06 10:17:40.663550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.772 [2024-12-06 
10:17:40.768936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.772 [2024-12-06 10:17:40.769083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:34.772 [2024-12-06 10:17:40.769104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.295 ms 00:18:34.773 [2024-12-06 10:17:40.769113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.773 [2024-12-06 10:17:40.792236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.773 [2024-12-06 10:17:40.792355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:34.773 [2024-12-06 10:17:40.792374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.082 ms 00:18:34.773 [2024-12-06 10:17:40.792382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.773 [2024-12-06 10:17:40.815590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.773 [2024-12-06 10:17:40.815621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:34.773 [2024-12-06 10:17:40.815634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.171 ms 00:18:34.773 [2024-12-06 10:17:40.815641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.773 [2024-12-06 10:17:40.815687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.773 [2024-12-06 10:17:40.815696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:34.773 [2024-12-06 10:17:40.815711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:34.773 [2024-12-06 10:17:40.815718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.773 [2024-12-06 10:17:40.815794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.773 [2024-12-06 10:17:40.815804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:34.773 [2024-12-06 10:17:40.815814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:34.773 [2024-12-06 10:17:40.815821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.773 [2024-12-06 10:17:40.816745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3461.694 ms, result 0 00:18:34.773 { 00:18:34.773 "name": "ftl0", 00:18:34.773 "uuid": "0a8eed8d-c2c6-4a4b-8d58-a06178154f26" 00:18:34.773 } 00:18:34.773 10:17:40 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:34.773 10:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:34.773 10:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:34.773 10:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:34.773 10:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:34.773 10:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:34.773 10:17:40 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:35.031 10:17:41 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:35.290 [ 00:18:35.290 { 00:18:35.290 "name": "ftl0", 00:18:35.290 "aliases": [ 00:18:35.290 "0a8eed8d-c2c6-4a4b-8d58-a06178154f26" 00:18:35.290 ], 00:18:35.290 "product_name": "FTL 
disk", 00:18:35.290 "block_size": 4096, 00:18:35.290 "num_blocks": 20971520, 00:18:35.290 "uuid": "0a8eed8d-c2c6-4a4b-8d58-a06178154f26", 00:18:35.290 "assigned_rate_limits": { 00:18:35.291 "rw_ios_per_sec": 0, 00:18:35.291 "rw_mbytes_per_sec": 0, 00:18:35.291 "r_mbytes_per_sec": 0, 00:18:35.291 "w_mbytes_per_sec": 0 00:18:35.291 }, 00:18:35.291 "claimed": false, 00:18:35.291 "zoned": false, 00:18:35.291 "supported_io_types": { 00:18:35.291 "read": true, 00:18:35.291 "write": true, 00:18:35.291 "unmap": true, 00:18:35.291 "flush": true, 00:18:35.291 "reset": false, 00:18:35.291 "nvme_admin": false, 00:18:35.291 "nvme_io": false, 00:18:35.291 "nvme_io_md": false, 00:18:35.291 "write_zeroes": true, 00:18:35.291 "zcopy": false, 00:18:35.291 "get_zone_info": false, 00:18:35.291 "zone_management": false, 00:18:35.291 "zone_append": false, 00:18:35.291 "compare": false, 00:18:35.291 "compare_and_write": false, 00:18:35.291 "abort": false, 00:18:35.291 "seek_hole": false, 00:18:35.291 "seek_data": false, 00:18:35.291 "copy": false, 00:18:35.291 "nvme_iov_md": false 00:18:35.291 }, 00:18:35.291 "driver_specific": { 00:18:35.291 "ftl": { 00:18:35.291 "base_bdev": "7048620e-abf5-48e0-9f6a-b14e5f78a210", 00:18:35.291 "cache": "nvc0n1p0" 00:18:35.291 } 00:18:35.291 } 00:18:35.291 } 00:18:35.291 ] 00:18:35.291 10:17:41 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:35.291 10:17:41 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:35.291 10:17:41 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:35.291 10:17:41 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:35.291 10:17:41 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:35.550 [2024-12-06 10:17:41.621392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.550 [2024-12-06 10:17:41.621441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:35.550 [2024-12-06 10:17:41.621471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.550 [2024-12-06 10:17:41.621483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.550 [2024-12-06 10:17:41.621511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.550 [2024-12-06 10:17:41.624084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.550 [2024-12-06 10:17:41.624113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:35.550 [2024-12-06 10:17:41.624125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.554 ms 00:18:35.550 [2024-12-06 10:17:41.624134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.550 [2024-12-06 10:17:41.624543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.550 [2024-12-06 10:17:41.624566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:35.550 [2024-12-06 10:17:41.624578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:18:35.550 [2024-12-06 10:17:41.624585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.550 [2024-12-06 10:17:41.627813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.550 [2024-12-06 10:17:41.627833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:35.551 
[2024-12-06 10:17:41.627845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:18:35.551 [2024-12-06 10:17:41.627853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.551 [2024-12-06 10:17:41.634042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.551 [2024-12-06 10:17:41.634167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.551 [2024-12-06 10:17:41.634185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.166 ms 00:18:35.551 [2024-12-06 10:17:41.634193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.551 [2024-12-06 10:17:41.657569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.551 [2024-12-06 10:17:41.657676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.551 [2024-12-06 10:17:41.657748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.294 ms 00:18:35.551 [2024-12-06 10:17:41.657771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.551 [2024-12-06 10:17:41.672919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.551 [2024-12-06 10:17:41.673033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.551 [2024-12-06 10:17:41.673098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.835 ms 00:18:35.551 [2024-12-06 10:17:41.673122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.551 [2024-12-06 10:17:41.673571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.551 [2024-12-06 10:17:41.673684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.551 [2024-12-06 10:17:41.673745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:18:35.551 [2024-12-06 10:17:41.673769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.551 [2024-12-06 10:17:41.697033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.551 [2024-12-06 10:17:41.697140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:35.551 [2024-12-06 10:17:41.697194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.224 ms 00:18:35.551 [2024-12-06 10:17:41.697215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.811 [2024-12-06 10:17:41.719726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.811 [2024-12-06 10:17:41.719823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:35.811 [2024-12-06 10:17:41.719872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.462 ms 00:18:35.811 [2024-12-06 10:17:41.719893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.811 [2024-12-06 10:17:41.742304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.811 [2024-12-06 10:17:41.742400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.811 [2024-12-06 10:17:41.742456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.363 ms 00:18:35.811 [2024-12-06 10:17:41.742479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.811 [2024-12-06 10:17:41.764676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.811 [2024-12-06 10:17:41.764770] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.811 [2024-12-06 10:17:41.764818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.106 ms 00:18:35.811 [2024-12-06 10:17:41.764839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.811 [2024-12-06 10:17:41.764881] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.811 [2024-12-06 10:17:41.765263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.765990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 
[2024-12-06 10:17:41.766415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.766989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:35.811 [2024-12-06 10:17:41.767563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.811 [2024-12-06 10:17:41.767610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:35.812 [2024-12-06 10:17:41.767991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:35.812 [2024-12-06 10:17:41.768009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:35.812 [2024-12-06 10:17:41.768016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:35.812 [2024-12-06 10:17:41.768027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:35.812 [2024-12-06 10:17:41.768042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:35.812 [2024-12-06 10:17:41.768052] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a8eed8d-c2c6-4a4b-8d58-a06178154f26
00:18:35.812 [2024-12-06 10:17:41.768059] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:35.812 [2024-12-06 10:17:41.768070] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:35.812 [2024-12-06 10:17:41.768078] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:35.812 [2024-12-06 10:17:41.768087] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:35.812 [2024-12-06 10:17:41.768094] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:35.812 [2024-12-06 10:17:41.768103] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:35.812 [2024-12-06 10:17:41.768111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:35.812 [2024-12-06 10:17:41.768119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:35.812 [2024-12-06 10:17:41.768125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:35.812 [2024-12-06 10:17:41.768135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.812 [2024-12-06 10:17:41.768143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:35.812 [2024-12-06 10:17:41.768153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms
00:18:35.812 [2024-12-06 10:17:41.768160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.812 [2024-12-06 10:17:41.780815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.812 [2024-12-06 10:17:41.780906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:35.812 [2024-12-06 10:17:41.780956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.606 ms
00:18:35.812 [2024-12-06 10:17:41.780995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.812 [2024-12-06 10:17:41.781371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:35.812 [2024-12-06 10:17:41.781444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:35.812 [2024-12-06 10:17:41.781510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms
00:18:35.812 [2024-12-06 10:17:41.781563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.812 [2024-12-06 10:17:41.824971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:35.812 [2024-12-06 10:17:41.825077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:35.812 [2024-12-06 10:17:41.825127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:35.812 [2024-12-06 10:17:41.825149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
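A note on the "WAF: inf" line in the statistics dump above: write amplification factor is media writes divided by user writes, and this unload ran against a device that saw only metadata traffic. The exact expression in ftl_debug.c is not shown in this log; the standard definition, applied to the numbers reported here, gives

    WAF = total writes / user writes = 960 / 0 -> inf

which is why the dump prints "inf" rather than a finite ratio.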
00:18:35.812 [2024-12-06 10:17:41.825217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.812 [2024-12-06 10:17:41.825264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.812 [2024-12-06 10:17:41.825289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.812 [2024-12-06 10:17:41.825308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.812 [2024-12-06 10:17:41.825421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.812 [2024-12-06 10:17:41.825472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.812 [2024-12-06 10:17:41.825630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.812 [2024-12-06 10:17:41.825659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.812 [2024-12-06 10:17:41.825699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.812 [2024-12-06 10:17:41.825720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.812 [2024-12-06 10:17:41.825740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.812 [2024-12-06 10:17:41.825795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.812 [2024-12-06 10:17:41.905118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.812 [2024-12-06 10:17:41.905259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.813 [2024-12-06 10:17:41.905330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 10:17:41.905353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.967917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.813 [2024-12-06 10:17:41.968058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.813 [2024-12-06 10:17:41.968127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 10:17:41.968150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.968251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.813 [2024-12-06 10:17:41.968275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.813 [2024-12-06 10:17:41.968299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 10:17:41.968355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.968425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.813 [2024-12-06 10:17:41.968463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.813 [2024-12-06 10:17:41.968487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 10:17:41.968505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.968621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.813 [2024-12-06 10:17:41.968646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.813 [2024-12-06 10:17:41.968667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 
10:17:41.968689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.968745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.813 [2024-12-06 10:17:41.968768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.813 [2024-12-06 10:17:41.968838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 10:17:41.968859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.968916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.813 [2024-12-06 10:17:41.968938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.813 [2024-12-06 10:17:41.968958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 10:17:41.968978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.969093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.813 [2024-12-06 10:17:41.969117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.813 [2024-12-06 10:17:41.969138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.813 [2024-12-06 10:17:41.969156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.813 [2024-12-06 10:17:41.969321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.902 ms, result 0 00:18:35.813 true 00:18:36.072 10:17:41 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75288 00:18:36.072 10:17:41 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75288 ']' 00:18:36.072 10:17:41 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75288 00:18:36.072 10:17:41 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:36.072 10:17:41 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.072 10:17:41 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75288 00:18:36.072 killing process with pid 75288 00:18:36.072 10:17:42 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.072 10:17:42 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.072 10:17:42 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75288' 00:18:36.072 10:17:42 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75288 00:18:36.072 10:17:42 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75288 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:44.198 10:17:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:44.198 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:44.198 fio-3.35 00:18:44.198 Starting 1 thread 00:18:47.492 00:18:47.492 test: (groupid=0, jobs=1): err= 0: pid=75479: Fri Dec 6 10:17:53 2024 00:18:47.492 read: IOPS=1223, BW=81.2MiB/s (85.2MB/s)(255MiB/3134msec) 00:18:47.493 slat (nsec): min=3071, max=24285, avg=4735.38, stdev=2178.79 00:18:47.493 clat (usec): min=242, max=1408, avg=368.66, stdev=116.07 00:18:47.493 lat (usec): min=247, max=1413, avg=373.40, stdev=116.63 00:18:47.493 clat percentiles (usec): 00:18:47.493 | 1.00th=[ 281], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 314], 00:18:47.493 | 30.00th=[ 318], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:18:47.493 | 70.00th=[ 334], 80.00th=[ 416], 90.00th=[ 510], 95.00th=[ 578], 00:18:47.493 | 99.00th=[ 963], 99.50th=[ 1020], 99.90th=[ 1205], 99.95th=[ 1270], 00:18:47.493 | 99.99th=[ 1401] 00:18:47.493 write: IOPS=1231, BW=81.8MiB/s (85.8MB/s)(256MiB/3131msec); 0 zone resets 00:18:47.493 slat (usec): min=13, max=124, avg=19.56, stdev= 3.87 00:18:47.493 clat (usec): min=292, max=2167, avg=408.65, stdev=167.42 00:18:47.493 lat (usec): min=311, max=2185, avg=428.21, stdev=167.88 00:18:47.493 clat percentiles (usec): 00:18:47.493 | 1.00th=[ 326], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 338], 00:18:47.493 | 30.00th=[ 343], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 351], 00:18:47.493 | 70.00th=[ 363], 80.00th=[ 416], 90.00th=[ 578], 95.00th=[ 660], 00:18:47.493 | 99.00th=[ 1221], 99.50th=[ 1516], 99.90th=[ 1926], 99.95th=[ 2147], 00:18:47.493 | 99.99th=[ 2180] 00:18:47.493 bw ( KiB/s): min=54808, max=94656, per=99.49%, avg=83322.67, stdev=17103.30, samples=6 00:18:47.493 iops : min= 806, max= 1392, avg=1225.33, stdev=251.52, samples=6 00:18:47.493 lat (usec) : 250=0.10%, 500=87.31%, 750=9.96%, 1000=1.40% 
00:18:47.493 lat (msec) : 2=1.20%, 4=0.03% 00:18:47.493 cpu : usr=99.30%, sys=0.03%, ctx=6, majf=0, minf=1169 00:18:47.493 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:47.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.493 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.493 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:47.493 00:18:47.493 Run status group 0 (all jobs): 00:18:47.493 READ: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=255MiB (267MB), run=3134-3134msec 00:18:47.493 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=256MiB (269MB), run=3131-3131msec 00:18:49.390 ----------------------------------------------------- 00:18:49.390 Suppressions used: 00:18:49.390 count bytes template 00:18:49.390 1 5 /usr/src/fio/parse.c 00:18:49.390 1 8 libtcmalloc_minimal.so 00:18:49.390 1 904 libcrypto.so 00:18:49.390 ----------------------------------------------------- 00:18:49.390 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:49.390 10:17:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:49.646 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:49.646 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:49.646 fio-3.35 00:18:49.646 Starting 2 threads 00:19:16.223 00:19:16.223 first_half: (groupid=0, jobs=1): err= 0: pid=75578: Fri Dec 6 10:18:18 2024 00:19:16.223 read: IOPS=2989, BW=11.7MiB/s (12.2MB/s)(255MiB/21825msec) 00:19:16.223 slat (nsec): min=3064, max=22682, avg=3745.47, stdev=582.91 00:19:16.223 clat (usec): min=595, max=274037, avg=33507.64, stdev=17026.73 00:19:16.223 lat (usec): min=599, max=274040, avg=33511.38, stdev=17026.73 00:19:16.223 clat percentiles (msec): 00:19:16.223 | 1.00th=[ 6], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:19:16.223 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:19:16.223 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 44], 00:19:16.223 | 99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 180], 99.95th=[ 236], 00:19:16.223 | 99.99th=[ 266] 00:19:16.223 write: IOPS=3900, BW=15.2MiB/s (16.0MB/s)(256MiB/16801msec); 0 zone resets 00:19:16.223 slat (usec): min=3, max=520, avg= 5.31, stdev= 3.55 00:19:16.223 clat (usec): min=359, max=78835, avg=9232.17, stdev=15378.69 00:19:16.223 lat (usec): min=366, max=78839, avg=9237.48, stdev=15378.70 00:19:16.223 clat percentiles (usec): 00:19:16.223 | 1.00th=[ 644], 5.00th=[ 734], 10.00th=[ 865], 20.00th=[ 1188], 00:19:16.223 | 30.00th=[ 2540], 40.00th=[ 3752], 50.00th=[ 4883], 60.00th=[ 5407], 00:19:16.223 | 70.00th=[ 6063], 80.00th=[ 9765], 90.00th=[14222], 95.00th=[57934], 00:19:16.223 | 99.00th=[65799], 99.50th=[68682], 99.90th=[76022], 99.95th=[77071], 00:19:16.223 | 99.99th=[78119] 00:19:16.223 bw ( KiB/s): min= 3208, max=41224, per=82.74%, avg=24966.10, stdev=12778.91, samples=21 00:19:16.223 iops : min= 802, max=10306, avg=6241.52, stdev=3194.73, samples=21 00:19:16.223 lat (usec) : 500=0.02%, 750=2.90%, 1000=4.29% 00:19:16.223 lat (msec) : 2=6.11%, 4=7.93%, 10=19.87%, 20=5.82%, 50=47.09% 00:19:16.223 lat (msec) : 100=5.02%, 250=0.95%, 500=0.01% 00:19:16.223 cpu : usr=99.43%, sys=0.11%, ctx=136, majf=0, minf=5591 00:19:16.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:16.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.223 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.223 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.223 second_half: (groupid=0, jobs=1): err= 0: pid=75579: Fri Dec 6 10:18:18 2024 00:19:16.223 read: IOPS=2975, BW=11.6MiB/s (12.2MB/s)(255MiB/21932msec) 00:19:16.223 slat (nsec): min=3097, max=20395, avg=3830.77, stdev=695.57 00:19:16.223 clat (usec): min=592, max=278631, avg=33257.61, stdev=17764.99 00:19:16.223 lat (usec): min=596, max=278636, avg=33261.44, stdev=17765.02 00:19:16.223 clat percentiles (msec): 00:19:16.223 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:19:16.223 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:19:16.223 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 36], 
95.00th=[ 45], 00:19:16.223 | 99.00th=[ 133], 99.50th=[ 146], 99.90th=[ 197], 99.95th=[ 228], 00:19:16.223 | 99.99th=[ 275] 00:19:16.223 write: IOPS=3771, BW=14.7MiB/s (15.4MB/s)(256MiB/17375msec); 0 zone resets 00:19:16.223 slat (usec): min=3, max=111, avg= 5.40, stdev= 2.15 00:19:16.223 clat (usec): min=383, max=78922, avg=9708.46, stdev=15979.73 00:19:16.223 lat (usec): min=391, max=78928, avg=9713.86, stdev=15979.76 00:19:16.223 clat percentiles (usec): 00:19:16.223 | 1.00th=[ 635], 5.00th=[ 725], 10.00th=[ 807], 20.00th=[ 1123], 00:19:16.223 | 30.00th=[ 2278], 40.00th=[ 3425], 50.00th=[ 4490], 60.00th=[ 5342], 00:19:16.223 | 70.00th=[ 5997], 80.00th=[10683], 90.00th=[24773], 95.00th=[58459], 00:19:16.223 | 99.00th=[66323], 99.50th=[68682], 99.90th=[72877], 99.95th=[76022], 00:19:16.223 | 99.99th=[78119] 00:19:16.223 bw ( KiB/s): min= 1408, max=42232, per=78.98%, avg=23831.27, stdev=12359.32, samples=22 00:19:16.223 iops : min= 352, max=10558, avg=5957.82, stdev=3089.83, samples=22 00:19:16.223 lat (usec) : 500=0.01%, 750=3.41%, 1000=4.98% 00:19:16.223 lat (msec) : 2=5.91%, 4=8.71%, 10=17.74%, 20=5.78%, 50=47.40% 00:19:16.223 lat (msec) : 100=5.08%, 250=0.98%, 500=0.01% 00:19:16.223 cpu : usr=99.31%, sys=0.09%, ctx=34, majf=0, minf=5528 00:19:16.223 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:16.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.223 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.223 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.223 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.223 00:19:16.223 Run status group 0 (all jobs): 00:19:16.223 READ: bw=23.2MiB/s (24.4MB/s), 11.6MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=510MiB (534MB), run=21825-21932msec 00:19:16.223 WRITE: bw=29.5MiB/s (30.9MB/s), 14.7MiB/s-15.2MiB/s (15.4MB/s-16.0MB/s), io=512MiB (537MB), run=16801-17375msec 00:19:16.223 ----------------------------------------------------- 00:19:16.223 Suppressions used: 00:19:16.223 count bytes template 00:19:16.223 2 10 /usr/src/fio/parse.c 00:19:16.223 2 192 /usr/src/fio/iolog.c 00:19:16.223 1 8 libtcmalloc_minimal.so 00:19:16.223 1 904 libcrypto.so 00:19:16.223 ----------------------------------------------------- 00:19:16.223 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:16.223 10:18:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:16.223 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:16.223 fio-3.35 00:19:16.223 Starting 1 thread 00:19:34.384 00:19:34.384 test: (groupid=0, jobs=1): err= 0: pid=75875: Fri Dec 6 10:18:38 2024 00:19:34.384 read: IOPS=7956, BW=31.1MiB/s (32.6MB/s)(255MiB/8195msec) 00:19:34.384 slat (nsec): min=3093, max=28217, avg=3592.90, stdev=741.73 00:19:34.384 clat (usec): min=940, max=36913, avg=16079.59, stdev=2605.39 00:19:34.384 lat (usec): min=950, max=36917, avg=16083.18, stdev=2605.67 00:19:34.384 clat percentiles (usec): 00:19:34.384 | 1.00th=[14353], 5.00th=[14615], 10.00th=[14746], 20.00th=[14877], 00:19:34.384 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:19:34.384 | 70.00th=[15795], 80.00th=[16188], 90.00th=[17433], 95.00th=[20579], 00:19:34.384 | 99.00th=[31327], 99.50th=[32637], 99.90th=[34341], 99.95th=[34341], 00:19:34.384 | 99.99th=[35390] 00:19:34.384 write: IOPS=8121, BW=31.7MiB/s (33.3MB/s)(256MiB/8069msec); 0 zone resets 00:19:34.384 slat (usec): min=4, max=833, avg= 7.22, stdev= 7.57 00:19:34.384 clat (usec): min=868, max=94988, avg=15677.52, stdev=19041.91 00:19:34.384 lat (usec): min=877, max=94996, avg=15684.74, stdev=19041.96 00:19:34.384 clat percentiles (usec): 00:19:34.384 | 1.00th=[ 1352], 5.00th=[ 1614], 10.00th=[ 1811], 20.00th=[ 2180], 00:19:34.384 | 30.00th=[ 2638], 40.00th=[ 3720], 50.00th=[ 9896], 60.00th=[12387], 00:19:34.384 | 70.00th=[14877], 80.00th=[17957], 90.00th=[54789], 95.00th=[60031], 00:19:34.384 | 99.00th=[67634], 99.50th=[69731], 99.90th=[74974], 99.95th=[79168], 00:19:34.384 | 99.99th=[88605] 00:19:34.384 bw ( KiB/s): min= 2264, max=44776, per=94.92%, avg=30836.65, stdev=9440.82, samples=17 00:19:34.384 iops : min= 566, max=11194, avg=7709.12, stdev=2360.20, samples=17 00:19:34.384 lat (usec) : 1000=0.01% 00:19:34.384 lat (msec) : 2=7.70%, 4=12.65%, 10=4.87%, 20=62.96%, 50=5.02% 00:19:34.384 lat (msec) : 100=6.78% 00:19:34.384 cpu : usr=99.03%, sys=0.16%, ctx=33, majf=0, 
minf=5565 00:19:34.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:34.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.384 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:34.384 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:34.384 00:19:34.384 Run status group 0 (all jobs): 00:19:34.384 READ: bw=31.1MiB/s (32.6MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.6MB/s), io=255MiB (267MB), run=8195-8195msec 00:19:34.384 WRITE: bw=31.7MiB/s (33.3MB/s), 31.7MiB/s-31.7MiB/s (33.3MB/s-33.3MB/s), io=256MiB (268MB), run=8069-8069msec 00:19:34.384 ----------------------------------------------------- 00:19:34.384 Suppressions used: 00:19:34.384 count bytes template 00:19:34.384 1 5 /usr/src/fio/parse.c 00:19:34.384 2 192 /usr/src/fio/iolog.c 00:19:34.384 1 8 libtcmalloc_minimal.so 00:19:34.384 1 904 libcrypto.so 00:19:34.384 ----------------------------------------------------- 00:19:34.384 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.384 Remove shared memory files 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57258 /dev/shm/spdk_tgt_trace.pid74216 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:34.384 ************************************ 00:19:34.384 END TEST ftl_fio_basic 00:19:34.384 ************************************ 00:19:34.384 00:19:34.384 real 1m6.355s 00:19:34.384 user 2m21.328s 00:19:34.384 sys 0m2.629s 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.384 10:18:40 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:34.384 10:18:40 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:34.384 10:18:40 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:34.384 10:18:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.384 10:18:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:34.384 ************************************ 00:19:34.384 START TEST ftl_bdevperf 00:19:34.384 ************************************ 00:19:34.384 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:34.384 * Looking for test storage... 
00:19:34.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.385 --rc genhtml_branch_coverage=1 00:19:34.385 --rc genhtml_function_coverage=1 00:19:34.385 --rc genhtml_legend=1 00:19:34.385 --rc geninfo_all_blocks=1 00:19:34.385 --rc geninfo_unexecuted_blocks=1 00:19:34.385 00:19:34.385 ' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.385 --rc genhtml_branch_coverage=1 00:19:34.385 
--rc genhtml_function_coverage=1 00:19:34.385 --rc genhtml_legend=1 00:19:34.385 --rc geninfo_all_blocks=1 00:19:34.385 --rc geninfo_unexecuted_blocks=1 00:19:34.385 00:19:34.385 ' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.385 --rc genhtml_branch_coverage=1 00:19:34.385 --rc genhtml_function_coverage=1 00:19:34.385 --rc genhtml_legend=1 00:19:34.385 --rc geninfo_all_blocks=1 00:19:34.385 --rc geninfo_unexecuted_blocks=1 00:19:34.385 00:19:34.385 ' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.385 --rc genhtml_branch_coverage=1 00:19:34.385 --rc genhtml_function_coverage=1 00:19:34.385 --rc genhtml_legend=1 00:19:34.385 --rc geninfo_all_blocks=1 00:19:34.385 --rc geninfo_unexecuted_blocks=1 00:19:34.385 00:19:34.385 ' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76136 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76136 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76136 ']' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.385 10:18:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:34.385 [2024-12-06 10:18:40.381652] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
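The xtrace above shows the harness starting bdevperf with -z (bring the app up idle and wait for an RPC to kick off the workload) and -T ftl0, then parking in waitforlisten until the process answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming the paths from this run; the retry count, sleep interval, and the rpc_get_methods probe are illustrative stand-ins for what the autotest_common.sh helpers actually do:

```bash
#!/usr/bin/env bash
# Sketch only: launch bdevperf suspended (-z) and poll the RPC socket until
# it accepts requests, roughly what waitforlisten does in this log.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/examples/bdevperf" -z -T ftl0 &
bdevperf_pid=$!
# The harness uses its killprocess helper here; plain kill is the stand-in.
trap 'kill "$bdevperf_pid" 2>/dev/null' SIGINT SIGTERM EXIT

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods succeeds as soon as the app is listening on the socket.
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
```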
00:19:34.385 [2024-12-06 10:18:40.381919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76136 ] 00:19:34.385 [2024-12-06 10:18:40.540663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.647 [2024-12-06 10:18:40.670471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:35.219 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:35.480 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:35.743 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.743 { 00:19:35.743 "name": "nvme0n1", 00:19:35.743 "aliases": [ 00:19:35.743 "7f87accf-6840-4b85-9ce7-e12707bee2a8" 00:19:35.743 ], 00:19:35.743 "product_name": "NVMe disk", 00:19:35.743 "block_size": 4096, 00:19:35.743 "num_blocks": 1310720, 00:19:35.743 "uuid": "7f87accf-6840-4b85-9ce7-e12707bee2a8", 00:19:35.743 "numa_id": -1, 00:19:35.743 "assigned_rate_limits": { 00:19:35.743 "rw_ios_per_sec": 0, 00:19:35.743 "rw_mbytes_per_sec": 0, 00:19:35.743 "r_mbytes_per_sec": 0, 00:19:35.743 "w_mbytes_per_sec": 0 00:19:35.743 }, 00:19:35.743 "claimed": true, 00:19:35.743 "claim_type": "read_many_write_one", 00:19:35.743 "zoned": false, 00:19:35.743 "supported_io_types": { 00:19:35.743 "read": true, 00:19:35.743 "write": true, 00:19:35.743 "unmap": true, 00:19:35.743 "flush": true, 00:19:35.743 "reset": true, 00:19:35.743 "nvme_admin": true, 00:19:35.743 "nvme_io": true, 00:19:35.743 "nvme_io_md": false, 00:19:35.743 "write_zeroes": true, 00:19:35.743 "zcopy": false, 00:19:35.743 "get_zone_info": false, 00:19:35.743 "zone_management": false, 00:19:35.743 "zone_append": false, 00:19:35.743 "compare": true, 00:19:35.743 "compare_and_write": false, 00:19:35.743 "abort": true, 00:19:35.743 "seek_hole": false, 00:19:35.743 "seek_data": false, 00:19:35.743 "copy": true, 00:19:35.743 "nvme_iov_md": false 00:19:35.743 }, 00:19:35.743 "driver_specific": { 00:19:35.743 
"nvme": [ 00:19:35.743 { 00:19:35.743 "pci_address": "0000:00:11.0", 00:19:35.743 "trid": { 00:19:35.743 "trtype": "PCIe", 00:19:35.743 "traddr": "0000:00:11.0" 00:19:35.743 }, 00:19:35.743 "ctrlr_data": { 00:19:35.743 "cntlid": 0, 00:19:35.743 "vendor_id": "0x1b36", 00:19:35.743 "model_number": "QEMU NVMe Ctrl", 00:19:35.743 "serial_number": "12341", 00:19:35.743 "firmware_revision": "8.0.0", 00:19:35.743 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:35.743 "oacs": { 00:19:35.743 "security": 0, 00:19:35.743 "format": 1, 00:19:35.743 "firmware": 0, 00:19:35.743 "ns_manage": 1 00:19:35.743 }, 00:19:35.743 "multi_ctrlr": false, 00:19:35.743 "ana_reporting": false 00:19:35.743 }, 00:19:35.743 "vs": { 00:19:35.743 "nvme_version": "1.4" 00:19:35.743 }, 00:19:35.743 "ns_data": { 00:19:35.744 "id": 1, 00:19:35.744 "can_share": false 00:19:35.744 } 00:19:35.744 } 00:19:35.744 ], 00:19:35.744 "mp_policy": "active_passive" 00:19:35.744 } 00:19:35.744 } 00:19:35.744 ]' 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:35.744 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:36.006 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=b05d77c5-e65b-4708-8080-abf59b533ec2 00:19:36.006 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:36.006 10:18:41 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b05d77c5-e65b-4708-8080-abf59b533ec2 00:19:36.266 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=17ae16da-6235-4a1c-8f86-bf59fc3e8e84 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 17ae16da-6235-4a1c-8f86-bf59fc3e8e84 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:36.528 10:18:42 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:36.528 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:36.791 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:36.791 { 00:19:36.791 "name": "e7b19fa1-7775-4fa1-96bb-45e16a849573", 00:19:36.791 "aliases": [ 00:19:36.791 "lvs/nvme0n1p0" 00:19:36.791 ], 00:19:36.791 "product_name": "Logical Volume", 00:19:36.791 "block_size": 4096, 00:19:36.791 "num_blocks": 26476544, 00:19:36.791 "uuid": "e7b19fa1-7775-4fa1-96bb-45e16a849573", 00:19:36.791 "assigned_rate_limits": { 00:19:36.791 "rw_ios_per_sec": 0, 00:19:36.791 "rw_mbytes_per_sec": 0, 00:19:36.791 "r_mbytes_per_sec": 0, 00:19:36.791 "w_mbytes_per_sec": 0 00:19:36.791 }, 00:19:36.791 "claimed": false, 00:19:36.791 "zoned": false, 00:19:36.791 "supported_io_types": { 00:19:36.791 "read": true, 00:19:36.791 "write": true, 00:19:36.791 "unmap": true, 00:19:36.791 "flush": false, 00:19:36.791 "reset": true, 00:19:36.791 "nvme_admin": false, 00:19:36.791 "nvme_io": false, 00:19:36.791 "nvme_io_md": false, 00:19:36.791 "write_zeroes": true, 00:19:36.791 "zcopy": false, 00:19:36.791 "get_zone_info": false, 00:19:36.791 "zone_management": false, 00:19:36.791 "zone_append": false, 00:19:36.791 "compare": false, 00:19:36.791 "compare_and_write": false, 00:19:36.791 "abort": false, 00:19:36.791 "seek_hole": true, 00:19:36.791 "seek_data": true, 00:19:36.791 "copy": false, 00:19:36.791 "nvme_iov_md": false 00:19:36.791 }, 00:19:36.791 "driver_specific": { 00:19:36.791 "lvol": { 00:19:36.791 "lvol_store_uuid": "17ae16da-6235-4a1c-8f86-bf59fc3e8e84", 00:19:36.791 "base_bdev": "nvme0n1", 00:19:36.791 "thin_provision": true, 00:19:36.791 "num_allocated_clusters": 0, 00:19:36.791 "snapshot": false, 00:19:36.791 "clone": false, 00:19:36.791 "esnap_clone": false 00:19:36.791 } 00:19:36.791 } 00:19:36.791 } 00:19:36.791 ]' 00:19:36.791 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:36.791 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:36.791 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:37.053 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:37.053 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:37.053 10:18:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:37.053 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:37.053 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:37.053 10:18:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.314 { 00:19:37.314 "name": "e7b19fa1-7775-4fa1-96bb-45e16a849573", 00:19:37.314 "aliases": [ 00:19:37.314 "lvs/nvme0n1p0" 00:19:37.314 ], 00:19:37.314 "product_name": "Logical Volume", 00:19:37.314 "block_size": 4096, 00:19:37.314 "num_blocks": 26476544, 00:19:37.314 "uuid": "e7b19fa1-7775-4fa1-96bb-45e16a849573", 00:19:37.314 "assigned_rate_limits": { 00:19:37.314 "rw_ios_per_sec": 0, 00:19:37.314 "rw_mbytes_per_sec": 0, 00:19:37.314 "r_mbytes_per_sec": 0, 00:19:37.314 "w_mbytes_per_sec": 0 00:19:37.314 }, 00:19:37.314 "claimed": false, 00:19:37.314 "zoned": false, 00:19:37.314 "supported_io_types": { 00:19:37.314 "read": true, 00:19:37.314 "write": true, 00:19:37.314 "unmap": true, 00:19:37.314 "flush": false, 00:19:37.314 "reset": true, 00:19:37.314 "nvme_admin": false, 00:19:37.314 "nvme_io": false, 00:19:37.314 "nvme_io_md": false, 00:19:37.314 "write_zeroes": true, 00:19:37.314 "zcopy": false, 00:19:37.314 "get_zone_info": false, 00:19:37.314 "zone_management": false, 00:19:37.314 "zone_append": false, 00:19:37.314 "compare": false, 00:19:37.314 "compare_and_write": false, 00:19:37.314 "abort": false, 00:19:37.314 "seek_hole": true, 00:19:37.314 "seek_data": true, 00:19:37.314 "copy": false, 00:19:37.314 "nvme_iov_md": false 00:19:37.314 }, 00:19:37.314 "driver_specific": { 00:19:37.314 "lvol": { 00:19:37.314 "lvol_store_uuid": "17ae16da-6235-4a1c-8f86-bf59fc3e8e84", 00:19:37.314 "base_bdev": "nvme0n1", 00:19:37.314 "thin_provision": true, 00:19:37.314 "num_allocated_clusters": 0, 00:19:37.314 "snapshot": false, 00:19:37.314 "clone": false, 00:19:37.314 "esnap_clone": false 00:19:37.314 } 00:19:37.314 } 00:19:37.314 } 00:19:37.314 ]' 00:19:37.314 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:37.576 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.576 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:37.576 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:37.576 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:37.576 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:37.576 10:18:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:37.576 10:18:43 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e7b19fa1-7775-4fa1-96bb-45e16a849573 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.838 { 00:19:37.838 "name": "e7b19fa1-7775-4fa1-96bb-45e16a849573", 00:19:37.838 "aliases": [ 00:19:37.838 "lvs/nvme0n1p0" 00:19:37.838 ], 00:19:37.838 "product_name": "Logical Volume", 00:19:37.838 "block_size": 4096, 00:19:37.838 "num_blocks": 26476544, 00:19:37.838 "uuid": "e7b19fa1-7775-4fa1-96bb-45e16a849573", 00:19:37.838 "assigned_rate_limits": { 00:19:37.838 "rw_ios_per_sec": 0, 00:19:37.838 "rw_mbytes_per_sec": 0, 00:19:37.838 "r_mbytes_per_sec": 0, 00:19:37.838 "w_mbytes_per_sec": 0 00:19:37.838 }, 00:19:37.838 "claimed": false, 00:19:37.838 "zoned": false, 00:19:37.838 "supported_io_types": { 00:19:37.838 "read": true, 00:19:37.838 "write": true, 00:19:37.838 "unmap": true, 00:19:37.838 "flush": false, 00:19:37.838 "reset": true, 00:19:37.838 "nvme_admin": false, 00:19:37.838 "nvme_io": false, 00:19:37.838 "nvme_io_md": false, 00:19:37.838 "write_zeroes": true, 00:19:37.838 "zcopy": false, 00:19:37.838 "get_zone_info": false, 00:19:37.838 "zone_management": false, 00:19:37.838 "zone_append": false, 00:19:37.838 "compare": false, 00:19:37.838 "compare_and_write": false, 00:19:37.838 "abort": false, 00:19:37.838 "seek_hole": true, 00:19:37.838 "seek_data": true, 00:19:37.838 "copy": false, 00:19:37.838 "nvme_iov_md": false 00:19:37.838 }, 00:19:37.838 "driver_specific": { 00:19:37.838 "lvol": { 00:19:37.838 "lvol_store_uuid": "17ae16da-6235-4a1c-8f86-bf59fc3e8e84", 00:19:37.838 "base_bdev": "nvme0n1", 00:19:37.838 "thin_provision": true, 00:19:37.838 "num_allocated_clusters": 0, 00:19:37.838 "snapshot": false, 00:19:37.838 "clone": false, 00:19:37.838 "esnap_clone": false 00:19:37.838 } 00:19:37.838 } 00:19:37.838 } 00:19:37.838 ]' 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:37.838 10:18:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.838 10:18:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:38.129 10:18:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:38.129 10:18:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:38.129 10:18:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:38.129 10:18:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:38.129 10:18:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e7b19fa1-7775-4fa1-96bb-45e16a849573 -c nvc0n1p0 --l2p_dram_limit 20 00:19:38.129 [2024-12-06 10:18:44.230727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.230800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.129 [2024-12-06 10:18:44.230817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:38.129 [2024-12-06 10:18:44.230829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.230899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.230912] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.129 [2024-12-06 10:18:44.230920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:38.129 [2024-12-06 10:18:44.230932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.230951] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.129 [2024-12-06 10:18:44.231791] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.129 [2024-12-06 10:18:44.231813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.231824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.129 [2024-12-06 10:18:44.231834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:19:38.129 [2024-12-06 10:18:44.231844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.231877] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 51a43988-1d17-4174-a4b6-f5fbcfcdfc69 00:19:38.129 [2024-12-06 10:18:44.233676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.233714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:38.129 [2024-12-06 10:18:44.233732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:38.129 [2024-12-06 10:18:44.233740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.242649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.242693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.129 [2024-12-06 10:18:44.242707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.817 ms 00:19:38.129 [2024-12-06 10:18:44.242719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.242823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.242833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.129 [2024-12-06 10:18:44.242849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:19:38.129 [2024-12-06 10:18:44.242857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.242914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.242924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.129 [2024-12-06 10:18:44.242934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:38.129 [2024-12-06 10:18:44.242942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.242968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.129 [2024-12-06 10:18:44.247478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.247523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.129 [2024-12-06 10:18:44.247534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.521 ms 00:19:38.129 [2024-12-06 10:18:44.247549] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.247589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.247599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.129 [2024-12-06 10:18:44.247609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:38.129 [2024-12-06 10:18:44.247618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.247661] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:38.129 [2024-12-06 10:18:44.247821] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.129 [2024-12-06 10:18:44.247834] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.129 [2024-12-06 10:18:44.247847] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.129 [2024-12-06 10:18:44.247858] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.129 [2024-12-06 10:18:44.247869] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.129 [2024-12-06 10:18:44.247878] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:38.129 [2024-12-06 10:18:44.247888] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.129 [2024-12-06 10:18:44.247895] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.129 [2024-12-06 10:18:44.247906] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.129 [2024-12-06 10:18:44.247916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.247926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.129 [2024-12-06 10:18:44.247934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:38.129 [2024-12-06 10:18:44.247943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.129 [2024-12-06 10:18:44.248044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.129 [2024-12-06 10:18:44.248055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.129 [2024-12-06 10:18:44.248062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:38.130 [2024-12-06 10:18:44.248073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.130 [2024-12-06 10:18:44.248164] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.130 [2024-12-06 10:18:44.248179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.130 [2024-12-06 10:18:44.248188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.130 [2024-12-06 10:18:44.248215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:38.130 
[2024-12-06 10:18:44.248231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.130 [2024-12-06 10:18:44.248238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.130 [2024-12-06 10:18:44.248254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.130 [2024-12-06 10:18:44.248273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:38.130 [2024-12-06 10:18:44.248280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.130 [2024-12-06 10:18:44.248293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.130 [2024-12-06 10:18:44.248299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:38.130 [2024-12-06 10:18:44.248310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.130 [2024-12-06 10:18:44.248326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.130 [2024-12-06 10:18:44.248349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.130 [2024-12-06 10:18:44.248373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.130 [2024-12-06 10:18:44.248395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.130 [2024-12-06 10:18:44.248420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.130 [2024-12-06 10:18:44.248458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.130 [2024-12-06 10:18:44.248474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.130 [2024-12-06 10:18:44.248483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:38.130 [2024-12-06 10:18:44.248490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.130 [2024-12-06 10:18:44.248500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.130 [2024-12-06 10:18:44.248507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:38.130 [2024-12-06 10:18:44.248515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.130 [2024-12-06 10:18:44.248530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:38.130 [2024-12-06 10:18:44.248537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248545] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.130 [2024-12-06 10:18:44.248553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.130 [2024-12-06 10:18:44.248565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.130 [2024-12-06 10:18:44.248587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.130 [2024-12-06 10:18:44.248594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.130 [2024-12-06 10:18:44.248653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.130 [2024-12-06 10:18:44.248661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.130 [2024-12-06 10:18:44.248669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.130 [2024-12-06 10:18:44.248676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.130 [2024-12-06 10:18:44.248687] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.130 [2024-12-06 10:18:44.248697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.130 [2024-12-06 10:18:44.248708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:38.130 [2024-12-06 10:18:44.248716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:38.130 [2024-12-06 10:18:44.248726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:38.130 [2024-12-06 10:18:44.248733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:38.130 [2024-12-06 10:18:44.248742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:38.130 [2024-12-06 10:18:44.248750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:38.130 [2024-12-06 10:18:44.248759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:38.130 [2024-12-06 10:18:44.248767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:38.130 [2024-12-06 10:18:44.248780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:38.130 [2024-12-06 10:18:44.248786] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:38.130 [2024-12-06 10:18:44.248796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:38.130 [2024-12-06 10:18:44.248804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:38.130 [2024-12-06 10:18:44.248813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:38.130 [2024-12-06 10:18:44.248820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:38.130 [2024-12-06 10:18:44.248829] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.130 [2024-12-06 10:18:44.248838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.131 [2024-12-06 10:18:44.248851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.131 [2024-12-06 10:18:44.248858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.131 [2024-12-06 10:18:44.248867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.131 [2024-12-06 10:18:44.248875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.131 [2024-12-06 10:18:44.248885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.131 [2024-12-06 10:18:44.248892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.131 [2024-12-06 10:18:44.248903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.785 ms 00:19:38.131 [2024-12-06 10:18:44.248911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.131 [2024-12-06 10:18:44.248948] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
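Everything from "Create new FTL, UUID ..." down through the layout dump above is the startup path of the single bdev_ftl_create RPC issued at bdevperf.sh@26 earlier in this log. The call is re-created below with the values from this run; the lvol UUID is regenerated on every invocation, so treat it as a placeholder. The layout numbers are self-consistent: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB l2p region, while --l2p_dram_limit 20 caps how much of that table may stay resident in DRAM.

```bash
# Recreation of the bdev_ftl_create call traced above (values from this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
base=e7b19fa1-7775-4fa1-96bb-45e16a849573   # thin lvol on nvme0n1; changes every run
cache=nvc0n1p0                              # 5171 MiB split of the cache NVMe at 0000:00:10.0

# -t 240 raises the RPC client timeout: the NV cache scrub alone took ~4 s here
# and scales with cache size, so the default timeout could expire mid-startup.
"$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$base" -c "$cache" --l2p_dram_limit 20
```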
00:19:38.131 [2024-12-06 10:18:44.248958] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:42.352 [2024-12-06 10:18:48.280147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.280213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:42.352 [2024-12-06 10:18:48.280231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4031.182 ms 00:19:42.352 [2024-12-06 10:18:48.280240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.307356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.307406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:42.352 [2024-12-06 10:18:48.307420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.877 ms 00:19:42.352 [2024-12-06 10:18:48.307428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.307572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.307583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:42.352 [2024-12-06 10:18:48.307597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:42.352 [2024-12-06 10:18:48.307605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.350365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.350594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:42.352 [2024-12-06 10:18:48.350624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.724 ms 00:19:42.352 [2024-12-06 10:18:48.350634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.350681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.350690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:42.352 [2024-12-06 10:18:48.350702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:42.352 [2024-12-06 10:18:48.350712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.351212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.351234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:42.352 [2024-12-06 10:18:48.351246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:19:42.352 [2024-12-06 10:18:48.351254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.351374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.351384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:42.352 [2024-12-06 10:18:48.351397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:42.352 [2024-12-06 10:18:48.351404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.366411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.366468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:42.352 [2024-12-06 
10:18:48.366482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.985 ms 00:19:42.352 [2024-12-06 10:18:48.366499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.379791] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:42.352 [2024-12-06 10:18:48.387159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.387212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:42.352 [2024-12-06 10:18:48.387224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.559 ms 00:19:42.352 [2024-12-06 10:18:48.387234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.491539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.491607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:42.352 [2024-12-06 10:18:48.491622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.271 ms 00:19:42.352 [2024-12-06 10:18:48.491634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.352 [2024-12-06 10:18:48.491848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.352 [2024-12-06 10:18:48.491867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:42.352 [2024-12-06 10:18:48.491878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:19:42.352 [2024-12-06 10:18:48.491892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.615 [2024-12-06 10:18:48.518933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.615 [2024-12-06 10:18:48.518983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:42.615 [2024-12-06 10:18:48.518997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.986 ms 00:19:42.615 [2024-12-06 10:18:48.519009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.615 [2024-12-06 10:18:48.545058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.615 [2024-12-06 10:18:48.545110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:42.615 [2024-12-06 10:18:48.545124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.997 ms 00:19:42.615 [2024-12-06 10:18:48.545134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.615 [2024-12-06 10:18:48.545776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.615 [2024-12-06 10:18:48.545798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:42.615 [2024-12-06 10:18:48.545808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:19:42.615 [2024-12-06 10:18:48.545818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.615 [2024-12-06 10:18:48.635391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.615 [2024-12-06 10:18:48.635465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:42.615 [2024-12-06 10:18:48.635480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.514 ms 00:19:42.616 [2024-12-06 10:18:48.635491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.616 [2024-12-06 
10:18:48.663985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.616 [2024-12-06 10:18:48.664053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:42.616 [2024-12-06 10:18:48.664071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.394 ms 00:19:42.616 [2024-12-06 10:18:48.664082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.616 [2024-12-06 10:18:48.691141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.616 [2024-12-06 10:18:48.691196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:42.616 [2024-12-06 10:18:48.691209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.006 ms 00:19:42.616 [2024-12-06 10:18:48.691218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.616 [2024-12-06 10:18:48.718101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.616 [2024-12-06 10:18:48.718156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:42.616 [2024-12-06 10:18:48.718169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.832 ms 00:19:42.616 [2024-12-06 10:18:48.718179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.616 [2024-12-06 10:18:48.718235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.616 [2024-12-06 10:18:48.718252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:42.616 [2024-12-06 10:18:48.718262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:42.616 [2024-12-06 10:18:48.718273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.616 [2024-12-06 10:18:48.718371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:42.616 [2024-12-06 10:18:48.718385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:42.616 [2024-12-06 10:18:48.718395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:42.616 [2024-12-06 10:18:48.718405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:42.616 [2024-12-06 10:18:48.719605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4488.372 ms, result 0 00:19:42.616 { 00:19:42.616 "name": "ftl0", 00:19:42.616 "uuid": "51a43988-1d17-4174-a4b6-f5fbcfcdfc69" 00:19:42.616 } 00:19:42.616 10:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:42.616 10:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:42.616 10:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:42.877 10:18:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:43.138 [2024-12-06 10:18:49.051718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:43.138 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:43.138 Zero copy mechanism will not be used. 00:19:43.138 Running I/O for 4 seconds... 
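With ftl0 up, bdevperf.py drives three timed passes over the device; the invocations, with flags copied from the xtrace lines in this log, are collected below for reference. The 69632-byte I/O size of the first pass is 65536 + 4096, one 4 KiB block over bdevperf's 65536-byte zero-copy threshold, which is why the notice above says the zero copy mechanism will not be used.

```bash
# The three timed passes bdevperf.sh runs against ftl0; $perf is just a
# shorthand introduced here, the flags are verbatim from the trace.
perf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

"$perf" perform_tests -q 1   -w randwrite -t 4 -o 69632   # 68 KiB writes, queue depth 1
"$perf" perform_tests -q 128 -w randwrite -t 4 -o 4096    # 4 KiB writes, queue depth 128
"$perf" perform_tests -q 128 -w verify    -t 4 -o 4096    # 4 KiB write-then-read verification
```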
00:19:45.022 1025.00 IOPS, 68.07 MiB/s [2024-12-06T10:18:52.136Z] 846.00 IOPS, 56.18 MiB/s [2024-12-06T10:18:53.083Z] 819.00 IOPS, 54.39 MiB/s [2024-12-06T10:18:53.083Z] 882.25 IOPS, 58.59 MiB/s
00:19:46.916 Latency(us)
00:19:46.916 [2024-12-06T10:18:53.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:46.916 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:46.916 ftl0 : 4.00 881.94 58.57 0.00 0.00 1198.12 261.51 3503.66
00:19:46.916 [2024-12-06T10:18:53.083Z] ===================================================================================================================
00:19:46.916 [2024-12-06T10:18:53.083Z] Total : 881.94 58.57 0.00 0.00 1198.12 261.51 3503.66
00:19:46.916 [2024-12-06 10:18:53.063770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:46.916 {
00:19:46.916 "results": [
00:19:46.916 {
00:19:46.916 "job": "ftl0",
00:19:46.916 "core_mask": "0x1",
00:19:46.916 "workload": "randwrite",
00:19:46.916 "status": "finished",
00:19:46.916 "queue_depth": 1,
00:19:46.916 "io_size": 69632,
00:19:46.916 "runtime": 4.002521,
00:19:46.916 "iops": 881.9441546965026,
00:19:46.916 "mibps": 58.56660402281462,
00:19:46.916 "io_failed": 0,
00:19:46.916 "io_timeout": 0,
00:19:46.916 "avg_latency_us": 1198.1190499019394,
00:19:46.916 "min_latency_us": 261.51384615384615,
00:19:46.916 "max_latency_us": 3503.6553846153847
00:19:46.916 }
00:19:46.916 ],
00:19:46.916 "core_count": 1
00:19:46.916 }
00:19:47.178 10:18:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:19:47.178 [2024-12-06 10:18:53.184434] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:47.178 Running I/O for 4 seconds...
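The human-readable table and the JSON blob above are two views of the same run; the MiB/s column is derived directly from IOPS times the I/O size. A quick sanity check of that relation with awk, values pasted from the JSON:

```bash
# 881.944 IOPS x 69632 B per I/O, converted to MiB/s; prints 58.57,
# matching the "mibps" field and the table's MiB/s column.
awk 'BEGIN { printf "%.2f MiB/s\n", 881.9441546965026 * 69632 / (1024 * 1024) }'
```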
00:19:49.060 5862.00 IOPS, 22.90 MiB/s [2024-12-06T10:18:56.217Z] 5257.50 IOPS, 20.54 MiB/s [2024-12-06T10:18:57.606Z] 5179.67 IOPS, 20.23 MiB/s [2024-12-06T10:18:57.606Z] 5014.00 IOPS, 19.59 MiB/s
00:19:51.439 Latency(us)
00:19:51.439 [2024-12-06T10:18:57.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:51.439 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:51.439 ftl0 : 4.04 4994.56 19.51 0.00 0.00 25522.93 310.35 48799.11
00:19:51.439 [2024-12-06T10:18:57.606Z] ===================================================================================================================
00:19:51.439 [2024-12-06T10:18:57.606Z] Total : 4994.56 19.51 0.00 0.00 25522.93 0.00 48799.11
00:19:51.439 [2024-12-06 10:18:57.235869] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:51.439 {
00:19:51.439 "results": [
00:19:51.439 {
00:19:51.439 "job": "ftl0",
00:19:51.439 "core_mask": "0x1",
00:19:51.439 "workload": "randwrite",
00:19:51.439 "status": "finished",
00:19:51.439 "queue_depth": 128,
00:19:51.439 "io_size": 4096,
00:19:51.439 "runtime": 4.040994,
00:19:51.439 "iops": 4994.563218851599,
00:19:51.439 "mibps": 19.51001257363906,
00:19:51.439 "io_failed": 0,
00:19:51.439 "io_timeout": 0,
00:19:51.439 "avg_latency_us": 25522.926631780745,
00:19:51.439 "min_latency_us": 310.35076923076923,
00:19:51.439 "max_latency_us": 48799.11384615384
00:19:51.439 }
00:19:51.439 ],
00:19:51.439 "core_count": 1
00:19:51.439 }
00:19:51.439 10:18:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:19:51.439 [2024-12-06 10:18:57.348494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:51.439 Running I/O for 4 seconds...
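At queue depth 128 the throughput and the mean latency of the run above are tied together by Little's law: in-flight I/Os = IOPS x mean latency. Plugging in the JSON values shows the queue stayed essentially full for the whole 4-second pass, i.e. the run was limited by the configured depth of 128 rather than by idle device time. Another awk one-liner, values pasted from the results:

```bash
# 4994.56 IOPS x 25522.93 us mean latency / 1e6 us-per-s ~= 127.5 I/Os in flight,
# within rounding of the submitted queue depth of 128.
awk 'BEGIN { printf "in-flight ~= %.1f\n", 4994.563218851599 * 25522.926631780745 / 1e6 }'
```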
00:19:53.326 4137.00 IOPS, 16.16 MiB/s [2024-12-06T10:19:00.438Z] 4169.00 IOPS, 16.29 MiB/s [2024-12-06T10:19:01.382Z] 4177.00 IOPS, 16.32 MiB/s [2024-12-06T10:19:01.382Z] 4182.00 IOPS, 16.34 MiB/s 00:19:55.215 Latency(us) 00:19:55.215 [2024-12-06T10:19:01.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.215 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:55.215 Verification LBA range: start 0x0 length 0x1400000 00:19:55.215 ftl0 : 4.02 4194.44 16.38 0.00 0.00 30416.95 450.56 41338.09 00:19:55.215 [2024-12-06T10:19:01.382Z] =================================================================================================================== 00:19:55.215 [2024-12-06T10:19:01.382Z] Total : 4194.44 16.38 0.00 0.00 30416.95 0.00 41338.09 00:19:55.477 [2024-12-06 10:19:01.384043] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:55.477 { 00:19:55.477 "results": [ 00:19:55.477 { 00:19:55.477 "job": "ftl0", 00:19:55.477 "core_mask": "0x1", 00:19:55.477 "workload": "verify", 00:19:55.477 "status": "finished", 00:19:55.477 "verify_range": { 00:19:55.477 "start": 0, 00:19:55.477 "length": 20971520 00:19:55.477 }, 00:19:55.477 "queue_depth": 128, 00:19:55.477 "io_size": 4096, 00:19:55.477 "runtime": 4.018649, 00:19:55.477 "iops": 4194.444451356662, 00:19:55.477 "mibps": 16.384548638111962, 00:19:55.477 "io_failed": 0, 00:19:55.477 "io_timeout": 0, 00:19:55.477 "avg_latency_us": 30416.94538498047, 00:19:55.477 "min_latency_us": 450.56, 00:19:55.477 "max_latency_us": 41338.092307692306 00:19:55.477 } 00:19:55.477 ], 00:19:55.477 "core_count": 1 00:19:55.477 } 00:19:55.477 10:19:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:55.477 [2024-12-06 10:19:01.602826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.477 [2024-12-06 10:19:01.602892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:55.477 [2024-12-06 10:19:01.602906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:55.477 [2024-12-06 10:19:01.602917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.477 [2024-12-06 10:19:01.602939] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.477 [2024-12-06 10:19:01.605996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.477 [2024-12-06 10:19:01.606043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:55.477 [2024-12-06 10:19:01.606056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.036 ms 00:19:55.477 [2024-12-06 10:19:01.606064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.477 [2024-12-06 10:19:01.609260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.477 [2024-12-06 10:19:01.609305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:55.477 [2024-12-06 10:19:01.609322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:19:55.477 [2024-12-06 10:19:01.609330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.739 [2024-12-06 10:19:01.844974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.739 [2024-12-06 10:19:01.845037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:19:55.739 [2024-12-06 10:19:01.845059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 235.616 ms 00:19:55.739 [2024-12-06 10:19:01.845069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.739 [2024-12-06 10:19:01.851269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.739 [2024-12-06 10:19:01.851314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:55.739 [2024-12-06 10:19:01.851330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.152 ms 00:19:55.739 [2024-12-06 10:19:01.851344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.739 [2024-12-06 10:19:01.877694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.739 [2024-12-06 10:19:01.877748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:55.739 [2024-12-06 10:19:01.877763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.287 ms 00:19:55.739 [2024-12-06 10:19:01.877771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.739 [2024-12-06 10:19:01.895214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.739 [2024-12-06 10:19:01.895269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:55.739 [2024-12-06 10:19:01.895285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.393 ms 00:19:55.739 [2024-12-06 10:19:01.895293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.739 [2024-12-06 10:19:01.895472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.739 [2024-12-06 10:19:01.895485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:55.739 [2024-12-06 10:19:01.895500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:55.739 [2024-12-06 10:19:01.895508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.002 [2024-12-06 10:19:01.921134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.002 [2024-12-06 10:19:01.921182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.002 [2024-12-06 10:19:01.921197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.604 ms 00:19:56.002 [2024-12-06 10:19:01.921205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.002 [2024-12-06 10:19:01.946959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.002 [2024-12-06 10:19:01.947006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.002 [2024-12-06 10:19:01.947020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.703 ms 00:19:56.002 [2024-12-06 10:19:01.947027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.002 [2024-12-06 10:19:01.971361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.002 [2024-12-06 10:19:01.971412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.002 [2024-12-06 10:19:01.971426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.283 ms 00:19:56.002 [2024-12-06 10:19:01.971433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.002 [2024-12-06 10:19:01.995603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.002 [2024-12-06 10:19:01.995650] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.002 [2024-12-06 10:19:01.995668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.069 ms 00:19:56.002 [2024-12-06 10:19:01.995675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.002 [2024-12-06 10:19:01.995722] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:56.002 [2024-12-06 10:19:01.995737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:56.002 [2024-12-06 10:19:01.995935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.995998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:56.002 [2024-12-06 10:19:01.996170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996640] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:56.003 [2024-12-06 10:19:01.996684] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:56.003 [2024-12-06 10:19:01.996694] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 51a43988-1d17-4174-a4b6-f5fbcfcdfc69 00:19:56.003 [2024-12-06 10:19:01.996705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:56.003 [2024-12-06 10:19:01.996714] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:56.003 [2024-12-06 10:19:01.996722] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:56.003 [2024-12-06 10:19:01.996732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:56.003 [2024-12-06 10:19:01.996740] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:56.003 [2024-12-06 10:19:01.996750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:56.003 [2024-12-06 10:19:01.996757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:56.003 [2024-12-06 10:19:01.996767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:56.003 [2024-12-06 10:19:01.996774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:56.003 [2024-12-06 10:19:01.996806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.003 [2024-12-06 10:19:01.996816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:56.003 [2024-12-06 10:19:01.996827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:19:56.003 [2024-12-06 10:19:01.996835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.003 [2024-12-06 10:19:02.010441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.003 [2024-12-06 10:19:02.010493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:56.003 [2024-12-06 10:19:02.010507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.563 ms 00:19:56.003 [2024-12-06 10:19:02.010516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.003 [2024-12-06 10:19:02.010910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.003 [2024-12-06 10:19:02.010934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:56.003 [2024-12-06 10:19:02.010945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:19:56.003 [2024-12-06 10:19:02.010953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.003 [2024-12-06 10:19:02.049611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.003 [2024-12-06 10:19:02.049662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:56.003 [2024-12-06 10:19:02.049679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.003 [2024-12-06 10:19:02.049687] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:56.003 [2024-12-06 10:19:02.049751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.003 [2024-12-06 10:19:02.049760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.003 [2024-12-06 10:19:02.049770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.003 [2024-12-06 10:19:02.049777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.003 [2024-12-06 10:19:02.049881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.003 [2024-12-06 10:19:02.049893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.004 [2024-12-06 10:19:02.049903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.004 [2024-12-06 10:19:02.049911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.004 [2024-12-06 10:19:02.049929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.004 [2024-12-06 10:19:02.049937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.004 [2024-12-06 10:19:02.049947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.004 [2024-12-06 10:19:02.049954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.004 [2024-12-06 10:19:02.133944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.004 [2024-12-06 10:19:02.133998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:56.004 [2024-12-06 10:19:02.134016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.004 [2024-12-06 10:19:02.134025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.203401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.266 [2024-12-06 10:19:02.203470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:56.266 [2024-12-06 10:19:02.203485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.266 [2024-12-06 10:19:02.203494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.203582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.266 [2024-12-06 10:19:02.203593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.266 [2024-12-06 10:19:02.203604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.266 [2024-12-06 10:19:02.203612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.203679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.266 [2024-12-06 10:19:02.203690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.266 [2024-12-06 10:19:02.203701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.266 [2024-12-06 10:19:02.203709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.203807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.266 [2024-12-06 10:19:02.203820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.266 [2024-12-06 10:19:02.203833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:56.266 [2024-12-06 10:19:02.203841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.203874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.266 [2024-12-06 10:19:02.203884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:56.266 [2024-12-06 10:19:02.203894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.266 [2024-12-06 10:19:02.203903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.203943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.266 [2024-12-06 10:19:02.203971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.266 [2024-12-06 10:19:02.203982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.266 [2024-12-06 10:19:02.203998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.204062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.266 [2024-12-06 10:19:02.204106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.266 [2024-12-06 10:19:02.204117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.266 [2024-12-06 10:19:02.204125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.266 [2024-12-06 10:19:02.204269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 601.398 ms, result 0 00:19:56.266 true 00:19:56.266 10:19:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76136 00:19:56.266 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76136 ']' 00:19:56.266 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76136 00:19:56.266 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:56.266 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.266 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76136 00:19:56.266 killing process with pid 76136 00:19:56.266 Received shutdown signal, test time was about 4.000000 seconds 00:19:56.266 00:19:56.266 Latency(us) 00:19:56.266 [2024-12-06T10:19:02.433Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.266 [2024-12-06T10:19:02.433Z] =================================================================================================================== 00:19:56.266 [2024-12-06T10:19:02.433Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.266 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.267 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.267 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76136' 00:19:56.267 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76136 00:19:56.267 10:19:02 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76136 00:19:57.205 Remove shared memory files 00:19:57.205 10:19:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:57.205 10:19:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:57.205 10:19:03 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:57.205 10:19:03 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:57.205 10:19:03 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:57.205 10:19:03 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:57.205 10:19:03 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:57.206 10:19:03 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:57.206 00:19:57.206 real 0m23.116s 00:19:57.206 user 0m25.734s 00:19:57.206 sys 0m0.985s 00:19:57.206 10:19:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.206 ************************************ 00:19:57.206 END TEST ftl_bdevperf 00:19:57.206 ************************************ 00:19:57.206 10:19:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:57.206 10:19:03 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:57.206 10:19:03 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:57.206 10:19:03 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.206 10:19:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:57.206 ************************************ 00:19:57.206 START TEST ftl_trim 00:19:57.206 ************************************ 00:19:57.206 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:57.464 * Looking for test storage... 00:19:57.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:57.464 10:19:03 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0
00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:57.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:57.464 --rc genhtml_branch_coverage=1
00:19:57.464 --rc genhtml_function_coverage=1
00:19:57.464 --rc genhtml_legend=1
00:19:57.464 --rc geninfo_all_blocks=1
00:19:57.464 --rc geninfo_unexecuted_blocks=1
00:19:57.464
00:19:57.464 '
00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:57.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:57.464 --rc genhtml_branch_coverage=1
00:19:57.464 --rc genhtml_function_coverage=1
00:19:57.464 --rc genhtml_legend=1
00:19:57.464 --rc geninfo_all_blocks=1
00:19:57.464 --rc geninfo_unexecuted_blocks=1
00:19:57.464
00:19:57.464 '
00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:19:57.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:57.464 --rc genhtml_branch_coverage=1
00:19:57.464 --rc genhtml_function_coverage=1
00:19:57.464 --rc genhtml_legend=1
00:19:57.464 --rc geninfo_all_blocks=1
00:19:57.464 --rc geninfo_unexecuted_blocks=1
00:19:57.464
00:19:57.464 '
00:19:57.464 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:19:57.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:57.464 --rc genhtml_branch_coverage=1
00:19:57.464 --rc genhtml_function_coverage=1
00:19:57.464 --rc genhtml_legend=1
00:19:57.464 --rc geninfo_all_blocks=1
00:19:57.464 --rc geninfo_unexecuted_blocks=1
00:19:57.464
00:19:57.464 '
00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
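The scripts/common.sh trace just above is autotest's lcov version gate; the ftl/common.sh lines around this point show the library fixing its paths as trim.sh sources it. Condensed into plain shell, a sketch of what the common.sh@8 through @10 steps amount to (the dirname argument is the sourcing script, as traced; values observed in this run are in the comments, and the rootdir/rpc_py assignments are traced immediately below):

  testdir=$(readlink -f "$(dirname "$0")")   # /home/vagrant/spdk_repo/spdk/test/ftl
  rootdir=$(readlink -f "$testdir/../..")    # /home/vagrant/spdk_repo/spdk
  rpc_py=$rootdir/scripts/rpc.py             # the client used for every bdev_* RPC in this test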
00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.464 10:19:03 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.465 10:19:03 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:57.465 10:19:03 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76495 00:19:57.465 10:19:03 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76495 00:19:57.465 10:19:03 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:57.465 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76495 ']' 00:19:57.465 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.465 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.465 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.465 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.465 10:19:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:57.465 [2024-12-06 10:19:03.573501] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:19:57.465 [2024-12-06 10:19:03.573617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76495 ] 00:19:57.725 [2024-12-06 10:19:03.731219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:57.725 [2024-12-06 10:19:03.834658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.725 [2024-12-06 10:19:03.835217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.725 [2024-12-06 10:19:03.835288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.669 10:19:04 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.669 10:19:04 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:58.669 10:19:04 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:58.669 10:19:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:58.669 10:19:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:58.669 10:19:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:58.669 10:19:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:58.669 10:19:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:58.929 10:19:05 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:58.929 { 00:19:58.929 "name": "nvme0n1", 00:19:58.929 "aliases": [ 
00:19:58.929 "cc72ebe8-2d56-4e39-a367-389cbc4bcf0c" 00:19:58.929 ], 00:19:58.929 "product_name": "NVMe disk", 00:19:58.929 "block_size": 4096, 00:19:58.929 "num_blocks": 1310720, 00:19:58.929 "uuid": "cc72ebe8-2d56-4e39-a367-389cbc4bcf0c", 00:19:58.929 "numa_id": -1, 00:19:58.929 "assigned_rate_limits": { 00:19:58.929 "rw_ios_per_sec": 0, 00:19:58.929 "rw_mbytes_per_sec": 0, 00:19:58.929 "r_mbytes_per_sec": 0, 00:19:58.930 "w_mbytes_per_sec": 0 00:19:58.930 }, 00:19:58.930 "claimed": true, 00:19:58.930 "claim_type": "read_many_write_one", 00:19:58.930 "zoned": false, 00:19:58.930 "supported_io_types": { 00:19:58.930 "read": true, 00:19:58.930 "write": true, 00:19:58.930 "unmap": true, 00:19:58.930 "flush": true, 00:19:58.930 "reset": true, 00:19:58.930 "nvme_admin": true, 00:19:58.930 "nvme_io": true, 00:19:58.930 "nvme_io_md": false, 00:19:58.930 "write_zeroes": true, 00:19:58.930 "zcopy": false, 00:19:58.930 "get_zone_info": false, 00:19:58.930 "zone_management": false, 00:19:58.930 "zone_append": false, 00:19:58.930 "compare": true, 00:19:58.930 "compare_and_write": false, 00:19:58.930 "abort": true, 00:19:58.930 "seek_hole": false, 00:19:58.930 "seek_data": false, 00:19:58.930 "copy": true, 00:19:58.930 "nvme_iov_md": false 00:19:58.930 }, 00:19:58.930 "driver_specific": { 00:19:58.930 "nvme": [ 00:19:58.930 { 00:19:58.930 "pci_address": "0000:00:11.0", 00:19:58.930 "trid": { 00:19:58.930 "trtype": "PCIe", 00:19:58.930 "traddr": "0000:00:11.0" 00:19:58.930 }, 00:19:58.930 "ctrlr_data": { 00:19:58.930 "cntlid": 0, 00:19:58.930 "vendor_id": "0x1b36", 00:19:58.930 "model_number": "QEMU NVMe Ctrl", 00:19:58.930 "serial_number": "12341", 00:19:58.930 "firmware_revision": "8.0.0", 00:19:58.930 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:58.930 "oacs": { 00:19:58.930 "security": 0, 00:19:58.930 "format": 1, 00:19:58.930 "firmware": 0, 00:19:58.930 "ns_manage": 1 00:19:58.930 }, 00:19:58.930 "multi_ctrlr": false, 00:19:58.930 "ana_reporting": false 00:19:58.930 }, 00:19:58.930 "vs": { 00:19:58.930 "nvme_version": "1.4" 00:19:58.930 }, 00:19:58.930 "ns_data": { 00:19:58.930 "id": 1, 00:19:58.930 "can_share": false 00:19:58.930 } 00:19:58.930 } 00:19:58.930 ], 00:19:58.930 "mp_policy": "active_passive" 00:19:58.930 } 00:19:58.930 } 00:19:58.930 ]' 00:19:58.930 10:19:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:59.190 10:19:05 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:59.190 10:19:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:59.190 10:19:05 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:59.190 10:19:05 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:59.190 10:19:05 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=17ae16da-6235-4a1c-8f86-bf59fc3e8e84 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:59.190 10:19:05 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 17ae16da-6235-4a1c-8f86-bf59fc3e8e84 00:19:59.466 10:19:05 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:59.747 10:19:05 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=78c9fde1-05a3-4585-82ad-c40d72b2f9bf 00:19:59.747 10:19:05 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 78c9fde1-05a3-4585-82ad-c40d72b2f9bf 00:20:00.009 10:19:06 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.009 10:19:06 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.009 10:19:06 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:20:00.009 10:19:06 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:00.009 10:19:06 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.009 10:19:06 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:20:00.009 10:19:06 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.009 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.009 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:00.009 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:00.009 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:00.009 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.271 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:00.271 { 00:20:00.271 "name": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 00:20:00.271 "aliases": [ 00:20:00.271 "lvs/nvme0n1p0" 00:20:00.271 ], 00:20:00.271 "product_name": "Logical Volume", 00:20:00.271 "block_size": 4096, 00:20:00.271 "num_blocks": 26476544, 00:20:00.271 "uuid": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 00:20:00.271 "assigned_rate_limits": { 00:20:00.271 "rw_ios_per_sec": 0, 00:20:00.271 "rw_mbytes_per_sec": 0, 00:20:00.271 "r_mbytes_per_sec": 0, 00:20:00.271 "w_mbytes_per_sec": 0 00:20:00.271 }, 00:20:00.271 "claimed": false, 00:20:00.271 "zoned": false, 00:20:00.271 "supported_io_types": { 00:20:00.271 "read": true, 00:20:00.271 "write": true, 00:20:00.271 "unmap": true, 00:20:00.271 "flush": false, 00:20:00.271 "reset": true, 00:20:00.271 "nvme_admin": false, 00:20:00.271 "nvme_io": false, 00:20:00.271 "nvme_io_md": false, 00:20:00.271 "write_zeroes": true, 00:20:00.271 "zcopy": false, 00:20:00.271 "get_zone_info": false, 00:20:00.271 "zone_management": false, 00:20:00.271 "zone_append": false, 00:20:00.271 "compare": false, 00:20:00.271 "compare_and_write": false, 00:20:00.271 "abort": false, 00:20:00.271 "seek_hole": true, 00:20:00.271 "seek_data": true, 00:20:00.271 "copy": false, 00:20:00.271 "nvme_iov_md": false 00:20:00.271 }, 00:20:00.271 "driver_specific": { 00:20:00.271 "lvol": { 00:20:00.271 "lvol_store_uuid": "78c9fde1-05a3-4585-82ad-c40d72b2f9bf", 00:20:00.271 "base_bdev": "nvme0n1", 00:20:00.271 "thin_provision": true, 00:20:00.271 "num_allocated_clusters": 0, 00:20:00.271 "snapshot": false, 00:20:00.271 "clone": false, 00:20:00.271 "esnap_clone": false 00:20:00.271 } 00:20:00.271 } 00:20:00.271 } 00:20:00.271 ]' 00:20:00.271 10:19:06 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:00.271 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:00.271 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:00.271 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:00.271 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:00.271 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:00.271 10:19:06 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:20:00.271 10:19:06 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:20:00.271 10:19:06 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:00.532 10:19:06 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:00.532 10:19:06 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:00.532 10:19:06 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.532 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.532 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:00.532 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:00.532 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:00.532 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:00.793 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:00.793 { 00:20:00.793 "name": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 00:20:00.793 "aliases": [ 00:20:00.793 "lvs/nvme0n1p0" 00:20:00.793 ], 00:20:00.793 "product_name": "Logical Volume", 00:20:00.793 "block_size": 4096, 00:20:00.793 "num_blocks": 26476544, 00:20:00.793 "uuid": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 00:20:00.793 "assigned_rate_limits": { 00:20:00.793 "rw_ios_per_sec": 0, 00:20:00.793 "rw_mbytes_per_sec": 0, 00:20:00.793 "r_mbytes_per_sec": 0, 00:20:00.793 "w_mbytes_per_sec": 0 00:20:00.793 }, 00:20:00.793 "claimed": false, 00:20:00.793 "zoned": false, 00:20:00.793 "supported_io_types": { 00:20:00.793 "read": true, 00:20:00.793 "write": true, 00:20:00.793 "unmap": true, 00:20:00.793 "flush": false, 00:20:00.793 "reset": true, 00:20:00.793 "nvme_admin": false, 00:20:00.793 "nvme_io": false, 00:20:00.793 "nvme_io_md": false, 00:20:00.793 "write_zeroes": true, 00:20:00.793 "zcopy": false, 00:20:00.793 "get_zone_info": false, 00:20:00.793 "zone_management": false, 00:20:00.793 "zone_append": false, 00:20:00.793 "compare": false, 00:20:00.793 "compare_and_write": false, 00:20:00.793 "abort": false, 00:20:00.793 "seek_hole": true, 00:20:00.793 "seek_data": true, 00:20:00.793 "copy": false, 00:20:00.793 "nvme_iov_md": false 00:20:00.793 }, 00:20:00.793 "driver_specific": { 00:20:00.793 "lvol": { 00:20:00.793 "lvol_store_uuid": "78c9fde1-05a3-4585-82ad-c40d72b2f9bf", 00:20:00.793 "base_bdev": "nvme0n1", 00:20:00.793 "thin_provision": true, 00:20:00.793 "num_allocated_clusters": 0, 00:20:00.793 "snapshot": false, 00:20:00.793 "clone": false, 00:20:00.793 "esnap_clone": false 00:20:00.793 } 00:20:00.793 } 00:20:00.793 } 00:20:00.793 ]' 00:20:00.793 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:00.793 10:19:06 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:00.793 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:00.793 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:00.793 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:00.793 10:19:06 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:00.793 10:19:06 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:20:00.793 10:19:06 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:01.054 10:19:07 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:20:01.054 10:19:07 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:20:01.054 10:19:07 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:01.054 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:01.054 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:01.054 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:20:01.054 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:20:01.054 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 00:20:01.313 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:01.313 { 00:20:01.313 "name": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 00:20:01.313 "aliases": [ 00:20:01.313 "lvs/nvme0n1p0" 00:20:01.313 ], 00:20:01.313 "product_name": "Logical Volume", 00:20:01.313 "block_size": 4096, 00:20:01.313 "num_blocks": 26476544, 00:20:01.313 "uuid": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 00:20:01.313 "assigned_rate_limits": { 00:20:01.313 "rw_ios_per_sec": 0, 00:20:01.313 "rw_mbytes_per_sec": 0, 00:20:01.313 "r_mbytes_per_sec": 0, 00:20:01.313 "w_mbytes_per_sec": 0 00:20:01.314 }, 00:20:01.314 "claimed": false, 00:20:01.314 "zoned": false, 00:20:01.314 "supported_io_types": { 00:20:01.314 "read": true, 00:20:01.314 "write": true, 00:20:01.314 "unmap": true, 00:20:01.314 "flush": false, 00:20:01.314 "reset": true, 00:20:01.314 "nvme_admin": false, 00:20:01.314 "nvme_io": false, 00:20:01.314 "nvme_io_md": false, 00:20:01.314 "write_zeroes": true, 00:20:01.314 "zcopy": false, 00:20:01.314 "get_zone_info": false, 00:20:01.314 "zone_management": false, 00:20:01.314 "zone_append": false, 00:20:01.314 "compare": false, 00:20:01.314 "compare_and_write": false, 00:20:01.314 "abort": false, 00:20:01.314 "seek_hole": true, 00:20:01.314 "seek_data": true, 00:20:01.314 "copy": false, 00:20:01.314 "nvme_iov_md": false 00:20:01.314 }, 00:20:01.314 "driver_specific": { 00:20:01.314 "lvol": { 00:20:01.314 "lvol_store_uuid": "78c9fde1-05a3-4585-82ad-c40d72b2f9bf", 00:20:01.314 "base_bdev": "nvme0n1", 00:20:01.314 "thin_provision": true, 00:20:01.314 "num_allocated_clusters": 0, 00:20:01.314 "snapshot": false, 00:20:01.314 "clone": false, 00:20:01.314 "esnap_clone": false 00:20:01.314 } 00:20:01.314 } 00:20:01.314 } 00:20:01.314 ]' 00:20:01.314 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:01.314 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:20:01.314 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:01.314 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:20:01.314 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:01.314 10:19:07 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:20:01.314 10:19:07 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:20:01.314 10:19:07 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1e5c987b-c2f9-4198-9deb-a3aceeb03fb5 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:20:01.574 [2024-12-06 10:19:07.539594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.574 [2024-12-06 10:19:07.539634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.574 [2024-12-06 10:19:07.539648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.574 [2024-12-06 10:19:07.539654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.574 [2024-12-06 10:19:07.541847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.574 [2024-12-06 10:19:07.541878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.574 [2024-12-06 10:19:07.541887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:20:01.574 [2024-12-06 10:19:07.541893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.574 [2024-12-06 10:19:07.541960] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.574 [2024-12-06 10:19:07.542543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.574 [2024-12-06 10:19:07.542566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.574 [2024-12-06 10:19:07.542573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.574 [2024-12-06 10:19:07.542581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:20:01.574 [2024-12-06 10:19:07.542587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.574 [2024-12-06 10:19:07.542840] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:20:01.574 [2024-12-06 10:19:07.543722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.574 [2024-12-06 10:19:07.543748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:01.574 [2024-12-06 10:19:07.543756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:01.574 [2024-12-06 10:19:07.543763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.574 [2024-12-06 10:19:07.548400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.574 [2024-12-06 10:19:07.548429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.574 [2024-12-06 10:19:07.548436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.579 ms 00:20:01.574 [2024-12-06 10:19:07.548444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.574 [2024-12-06 10:19:07.548549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.574 [2024-12-06 10:19:07.548560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.574 [2024-12-06 10:19:07.548567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:20:01.575 [2024-12-06 10:19:07.548576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.575 [2024-12-06 10:19:07.548606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.575 [2024-12-06 10:19:07.548614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.575 [2024-12-06 10:19:07.548619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:01.575 [2024-12-06 10:19:07.548628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.575 [2024-12-06 10:19:07.548651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:01.575 [2024-12-06 10:19:07.551398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.575 [2024-12-06 10:19:07.551424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.575 [2024-12-06 10:19:07.551434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:20:01.575 [2024-12-06 10:19:07.551440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.575 [2024-12-06 10:19:07.551486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.575 [2024-12-06 10:19:07.551503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.575 [2024-12-06 10:19:07.551510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:01.575 [2024-12-06 10:19:07.551516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.575 [2024-12-06 10:19:07.551540] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:01.575 [2024-12-06 10:19:07.551648] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:01.575 [2024-12-06 10:19:07.551665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.575 [2024-12-06 10:19:07.551674] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:01.575 [2024-12-06 10:19:07.551683] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.575 [2024-12-06 10:19:07.551690] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.575 [2024-12-06 10:19:07.551698] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:01.575 [2024-12-06 10:19:07.551703] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.575 [2024-12-06 10:19:07.551713] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:01.575 [2024-12-06 10:19:07.551718] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:01.575 [2024-12-06 10:19:07.551726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.575 [2024-12-06 10:19:07.551732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.575 [2024-12-06 10:19:07.551739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:20:01.575 [2024-12-06 10:19:07.551744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.575 [2024-12-06 10:19:07.551821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.575 
[2024-12-06 10:19:07.551827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.575 [2024-12-06 10:19:07.551834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:01.575 [2024-12-06 10:19:07.551840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.575 [2024-12-06 10:19:07.551932] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.575 [2024-12-06 10:19:07.551943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.575 [2024-12-06 10:19:07.551951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.575 [2024-12-06 10:19:07.551956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.575 [2024-12-06 10:19:07.551964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.575 [2024-12-06 10:19:07.551969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.575 [2024-12-06 10:19:07.551976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:01.575 [2024-12-06 10:19:07.551981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.575 [2024-12-06 10:19:07.551988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:01.575 [2024-12-06 10:19:07.551993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.575 [2024-12-06 10:19:07.552000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.575 [2024-12-06 10:19:07.552011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:01.575 [2024-12-06 10:19:07.552019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.575 [2024-12-06 10:19:07.552024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.575 [2024-12-06 10:19:07.552030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:01.575 [2024-12-06 10:19:07.552035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.575 [2024-12-06 10:19:07.552049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:01.575 [2024-12-06 10:19:07.552056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.575 [2024-12-06 10:19:07.552069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.575 [2024-12-06 10:19:07.552080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.575 [2024-12-06 10:19:07.552085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.575 [2024-12-06 10:19:07.552097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.575 [2024-12-06 10:19:07.552103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.575 [2024-12-06 10:19:07.552115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:01.575 [2024-12-06 10:19:07.552120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.575 [2024-12-06 10:19:07.552132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.575 [2024-12-06 10:19:07.552139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.575 [2024-12-06 10:19:07.552151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.575 [2024-12-06 10:19:07.552156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:01.575 [2024-12-06 10:19:07.552162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.575 [2024-12-06 10:19:07.552167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:01.575 [2024-12-06 10:19:07.552175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:01.575 [2024-12-06 10:19:07.552180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:01.575 [2024-12-06 10:19:07.552191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:01.575 [2024-12-06 10:19:07.552197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552202] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.575 [2024-12-06 10:19:07.552209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.575 [2024-12-06 10:19:07.552215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.575 [2024-12-06 10:19:07.552221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.575 [2024-12-06 10:19:07.552227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.575 [2024-12-06 10:19:07.552235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.575 [2024-12-06 10:19:07.552240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.575 [2024-12-06 10:19:07.552248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.575 [2024-12-06 10:19:07.552253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.575 [2024-12-06 10:19:07.552260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.575 [2024-12-06 10:19:07.552266] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.575 [2024-12-06 10:19:07.552277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.575 [2024-12-06 10:19:07.552285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:01.575 [2024-12-06 10:19:07.552292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:01.575 [2024-12-06 10:19:07.552298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:01.575 [2024-12-06 10:19:07.552304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:01.575 [2024-12-06 10:19:07.552310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:01.575 [2024-12-06 10:19:07.552316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:01.575 [2024-12-06 10:19:07.552322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:01.575 [2024-12-06 10:19:07.552329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:01.575 [2024-12-06 10:19:07.552335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:01.575 [2024-12-06 10:19:07.552343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:01.575 [2024-12-06 10:19:07.552348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:01.575 [2024-12-06 10:19:07.552354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:01.575 [2024-12-06 10:19:07.552360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:01.575 [2024-12-06 10:19:07.552366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:01.576 [2024-12-06 10:19:07.552372] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.576 [2024-12-06 10:19:07.552379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.576 [2024-12-06 10:19:07.552385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.576 [2024-12-06 10:19:07.552392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.576 [2024-12-06 10:19:07.552397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.576 [2024-12-06 10:19:07.552403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.576 [2024-12-06 10:19:07.552409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.576 [2024-12-06 10:19:07.552415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.576 [2024-12-06 10:19:07.552421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:20:01.576 [2024-12-06 10:19:07.552428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.576 [2024-12-06 10:19:07.552508] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:01.576 [2024-12-06 10:19:07.552520] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:04.118 [2024-12-06 10:19:09.697759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.697818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:04.118 [2024-12-06 10:19:09.697833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2145.241 ms 00:20:04.118 [2024-12-06 10:19:09.697844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.722916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.722964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.118 [2024-12-06 10:19:09.722976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.750 ms 00:20:04.118 [2024-12-06 10:19:09.722985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.723109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.723122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:04.118 [2024-12-06 10:19:09.723144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:04.118 [2024-12-06 10:19:09.723157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.767037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.767085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.118 [2024-12-06 10:19:09.767097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.851 ms 00:20:04.118 [2024-12-06 10:19:09.767108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.767181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.767194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:04.118 [2024-12-06 10:19:09.767203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:04.118 [2024-12-06 10:19:09.767212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.767553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.767578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:04.118 [2024-12-06 10:19:09.767587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:20:04.118 [2024-12-06 10:19:09.767596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.767705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.767715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:04.118 [2024-12-06 10:19:09.767735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:04.118 [2024-12-06 10:19:09.767746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.781808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.781842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:04.118 [2024-12-06 10:19:09.781852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.032 ms 00:20:04.118 [2024-12-06 10:19:09.781861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.792962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:04.118 [2024-12-06 10:19:09.806861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.806894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:04.118 [2024-12-06 10:19:09.806906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.916 ms 00:20:04.118 [2024-12-06 10:19:09.806914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.872900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.872943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:04.118 [2024-12-06 10:19:09.872957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.908 ms 00:20:04.118 [2024-12-06 10:19:09.872965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.873157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.873168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:04.118 [2024-12-06 10:19:09.873180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:20:04.118 [2024-12-06 10:19:09.873187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.895886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.895922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:04.118 [2024-12-06 10:19:09.895938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.666 ms 00:20:04.118 [2024-12-06 10:19:09.895946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.918152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.918185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:04.118 [2024-12-06 10:19:09.918198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.164 ms 00:20:04.118 [2024-12-06 10:19:09.918205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.918775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.918796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:04.118 [2024-12-06 10:19:09.918806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:20:04.118 [2024-12-06 10:19:09.918814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:09.989420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:09.989463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:04.118 [2024-12-06 10:19:09.989478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.576 ms 00:20:04.118 [2024-12-06 10:19:09.989486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
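Each management step in the trace above is reported by trace_step() as an Action / name / duration / status quadruple, and the whole startup sequence was driven by the single bdev_ftl_create RPC issued by trim.sh at the top of this run. For reference, a minimal standalone form of that call is sketched below; it is only a sketch of what the trace shows, with <base_bdev> standing in for the base-device UUID used in this run (1e5c987b-c2f9-4198-9deb-a3aceeb03fb5), and the remaining flags copied from the trim.sh trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
        -b ftl0 -d <base_bdev> -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

Here -b names the new FTL bdev, -d the base (data) bdev, and -c the NV-cache bdev (nvc0n1p0, reported above as the write buffer cache); --l2p_dram_limit caps the DRAM available for the L2P table in MiB, which is presumably why the l2p notice above reports a maximum resident size of 59 (of 60) MiB.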
00:20:04.118 [2024-12-06 10:19:10.014229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:10.014271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:04.118 [2024-12-06 10:19:10.014286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.657 ms 00:20:04.118 [2024-12-06 10:19:10.014295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:10.038554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:10.038601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:04.118 [2024-12-06 10:19:10.038615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.179 ms 00:20:04.118 [2024-12-06 10:19:10.038624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:10.061550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:10.061603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:04.118 [2024-12-06 10:19:10.061617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.847 ms 00:20:04.118 [2024-12-06 10:19:10.061625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:10.061690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:10.061701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:04.118 [2024-12-06 10:19:10.061713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:04.118 [2024-12-06 10:19:10.061720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:10.061788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.118 [2024-12-06 10:19:10.061797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:04.118 [2024-12-06 10:19:10.061806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:04.118 [2024-12-06 10:19:10.061814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.118 [2024-12-06 10:19:10.062637] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:04.118 [2024-12-06 10:19:10.065864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2522.748 ms, result 0 00:20:04.118 [2024-12-06 10:19:10.066505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:04.118 { 00:20:04.118 "name": "ftl0", 00:20:04.118 "uuid": "5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d" 00:20:04.118 } 00:20:04.118 10:19:10 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:04.118 10:19:10 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:04.118 10:19:10 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:04.118 10:19:10 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:04.118 10:19:10 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:04.118 10:19:10 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:04.118 10:19:10 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:04.377 10:19:10 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:04.377 [ 00:20:04.377 { 00:20:04.377 "name": "ftl0", 00:20:04.377 "aliases": [ 00:20:04.378 "5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d" 00:20:04.378 ], 00:20:04.378 "product_name": "FTL disk", 00:20:04.378 "block_size": 4096, 00:20:04.378 "num_blocks": 23592960, 00:20:04.378 "uuid": "5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d", 00:20:04.378 "assigned_rate_limits": { 00:20:04.378 "rw_ios_per_sec": 0, 00:20:04.378 "rw_mbytes_per_sec": 0, 00:20:04.378 "r_mbytes_per_sec": 0, 00:20:04.378 "w_mbytes_per_sec": 0 00:20:04.378 }, 00:20:04.378 "claimed": false, 00:20:04.378 "zoned": false, 00:20:04.378 "supported_io_types": { 00:20:04.378 "read": true, 00:20:04.378 "write": true, 00:20:04.378 "unmap": true, 00:20:04.378 "flush": true, 00:20:04.378 "reset": false, 00:20:04.378 "nvme_admin": false, 00:20:04.378 "nvme_io": false, 00:20:04.378 "nvme_io_md": false, 00:20:04.378 "write_zeroes": true, 00:20:04.378 "zcopy": false, 00:20:04.378 "get_zone_info": false, 00:20:04.378 "zone_management": false, 00:20:04.378 "zone_append": false, 00:20:04.378 "compare": false, 00:20:04.378 "compare_and_write": false, 00:20:04.378 "abort": false, 00:20:04.378 "seek_hole": false, 00:20:04.378 "seek_data": false, 00:20:04.378 "copy": false, 00:20:04.378 "nvme_iov_md": false 00:20:04.378 }, 00:20:04.378 "driver_specific": { 00:20:04.378 "ftl": { 00:20:04.378 "base_bdev": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 00:20:04.378 "cache": "nvc0n1p0" 00:20:04.378 } 00:20:04.378 } 00:20:04.378 } 00:20:04.378 ] 00:20:04.378 10:19:10 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:04.378 10:19:10 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:04.378 10:19:10 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:04.636 10:19:10 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:04.636 10:19:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:04.895 10:19:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:04.895 { 00:20:04.895 "name": "ftl0", 00:20:04.895 "aliases": [ 00:20:04.895 "5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d" 00:20:04.895 ], 00:20:04.895 "product_name": "FTL disk", 00:20:04.895 "block_size": 4096, 00:20:04.895 "num_blocks": 23592960, 00:20:04.895 "uuid": "5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d", 00:20:04.895 "assigned_rate_limits": { 00:20:04.895 "rw_ios_per_sec": 0, 00:20:04.895 "rw_mbytes_per_sec": 0, 00:20:04.895 "r_mbytes_per_sec": 0, 00:20:04.895 "w_mbytes_per_sec": 0 00:20:04.895 }, 00:20:04.895 "claimed": false, 00:20:04.895 "zoned": false, 00:20:04.895 "supported_io_types": { 00:20:04.895 "read": true, 00:20:04.895 "write": true, 00:20:04.895 "unmap": true, 00:20:04.895 "flush": true, 00:20:04.895 "reset": false, 00:20:04.895 "nvme_admin": false, 00:20:04.895 "nvme_io": false, 00:20:04.895 "nvme_io_md": false, 00:20:04.895 "write_zeroes": true, 00:20:04.895 "zcopy": false, 00:20:04.895 "get_zone_info": false, 00:20:04.895 "zone_management": false, 00:20:04.895 "zone_append": false, 00:20:04.895 "compare": false, 00:20:04.895 "compare_and_write": false, 00:20:04.895 "abort": false, 00:20:04.895 "seek_hole": false, 00:20:04.895 "seek_data": false, 00:20:04.895 "copy": false, 00:20:04.895 "nvme_iov_md": false 00:20:04.895 }, 00:20:04.895 "driver_specific": { 00:20:04.895 "ftl": { 00:20:04.895 "base_bdev": "1e5c987b-c2f9-4198-9deb-a3aceeb03fb5", 
00:20:04.895 "cache": "nvc0n1p0" 00:20:04.895 } 00:20:04.895 } 00:20:04.895 } 00:20:04.895 ]' 00:20:04.895 10:19:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:04.895 10:19:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:04.895 10:19:10 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:05.183 [2024-12-06 10:19:11.101642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.101686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:05.183 [2024-12-06 10:19:11.101700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:05.183 [2024-12-06 10:19:11.101710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.101747] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:05.183 [2024-12-06 10:19:11.104337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.104367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:05.183 [2024-12-06 10:19:11.104383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:20:05.183 [2024-12-06 10:19:11.104391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.104860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.104880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:05.183 [2024-12-06 10:19:11.104890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:20:05.183 [2024-12-06 10:19:11.104898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.108540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.108561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:05.183 [2024-12-06 10:19:11.108571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:20:05.183 [2024-12-06 10:19:11.108579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.115706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.115737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:05.183 [2024-12-06 10:19:11.115749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.077 ms 00:20:05.183 [2024-12-06 10:19:11.115757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.139301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.139335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:05.183 [2024-12-06 10:19:11.139349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.464 ms 00:20:05.183 [2024-12-06 10:19:11.139356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.154944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.154980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:05.183 [2024-12-06 10:19:11.154995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.527 ms 00:20:05.183 [2024-12-06 10:19:11.155003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.155197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.155208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:05.183 [2024-12-06 10:19:11.155218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:20:05.183 [2024-12-06 10:19:11.155225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.177831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.177864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:05.183 [2024-12-06 10:19:11.177876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.582 ms 00:20:05.183 [2024-12-06 10:19:11.177883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.200531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.200563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:05.183 [2024-12-06 10:19:11.200577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.595 ms 00:20:05.183 [2024-12-06 10:19:11.200584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.183 [2024-12-06 10:19:11.222668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.183 [2024-12-06 10:19:11.222700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:05.183 [2024-12-06 10:19:11.222713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.028 ms 00:20:05.184 [2024-12-06 10:19:11.222719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.184 [2024-12-06 10:19:11.245063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.184 [2024-12-06 10:19:11.245096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:05.184 [2024-12-06 10:19:11.245108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.243 ms 00:20:05.184 [2024-12-06 10:19:11.245115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.184 [2024-12-06 10:19:11.245165] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:05.184 [2024-12-06 10:19:11.245179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245242] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 
[2024-12-06 10:19:11.245474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:05.184 [2024-12-06 10:19:11.245680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:05.184 [2024-12-06 10:19:11.245804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.245998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.246007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.246014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.246024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:05.185 [2024-12-06 10:19:11.246040] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:05.185 [2024-12-06 10:19:11.246050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:20:05.185 [2024-12-06 10:19:11.246058] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:05.185 [2024-12-06 10:19:11.246066] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:05.185 [2024-12-06 10:19:11.246074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:05.185 [2024-12-06 10:19:11.246083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:05.185 [2024-12-06 10:19:11.246090] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:05.185 [2024-12-06 10:19:11.246099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:05.185 [2024-12-06 10:19:11.246105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:05.185 [2024-12-06 10:19:11.246113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:05.185 [2024-12-06 10:19:11.246120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:05.185 [2024-12-06 10:19:11.246128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.185 [2024-12-06 10:19:11.246135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:05.185 [2024-12-06 10:19:11.246144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:20:05.185 [2024-12-06 10:19:11.246151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.185 [2024-12-06 10:19:11.258641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.185 [2024-12-06 10:19:11.258676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:05.185 [2024-12-06 10:19:11.258690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.449 ms 00:20:05.185 [2024-12-06 10:19:11.258698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.185 [2024-12-06 10:19:11.259064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.185 [2024-12-06 10:19:11.259085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:05.185 [2024-12-06 10:19:11.259094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:05.185 [2024-12-06 10:19:11.259101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.185 [2024-12-06 10:19:11.302351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.185 [2024-12-06 10:19:11.302387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.185 [2024-12-06 10:19:11.302399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.185 [2024-12-06 10:19:11.302407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.185 [2024-12-06 10:19:11.302499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.185 [2024-12-06 10:19:11.302509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.185 [2024-12-06 10:19:11.302518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.185 [2024-12-06 10:19:11.302525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.185 [2024-12-06 10:19:11.302583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.185 [2024-12-06 10:19:11.302594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.185 [2024-12-06 10:19:11.302604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.185 [2024-12-06 10:19:11.302611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.185 [2024-12-06 10:19:11.302638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.185 [2024-12-06 10:19:11.302645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.185 [2024-12-06 10:19:11.302654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.185 [2024-12-06 10:19:11.302661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.443 [2024-12-06 10:19:11.383201] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.443 [2024-12-06 10:19:11.383240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.443 [2024-12-06 10:19:11.383252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.443 [2024-12-06 10:19:11.383260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.443 [2024-12-06 10:19:11.446117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.443 [2024-12-06 10:19:11.446157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.443 [2024-12-06 10:19:11.446169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.444 [2024-12-06 10:19:11.446177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.444 [2024-12-06 10:19:11.446252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.444 [2024-12-06 10:19:11.446262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.444 [2024-12-06 10:19:11.446275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.444 [2024-12-06 10:19:11.446282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.444 [2024-12-06 10:19:11.446331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.444 [2024-12-06 10:19:11.446339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.444 [2024-12-06 10:19:11.446348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.444 [2024-12-06 10:19:11.446355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.444 [2024-12-06 10:19:11.446470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.444 [2024-12-06 10:19:11.446480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.444 [2024-12-06 10:19:11.446490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.444 [2024-12-06 10:19:11.446499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.444 [2024-12-06 10:19:11.446548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.444 [2024-12-06 10:19:11.446556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:05.444 [2024-12-06 10:19:11.446566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.444 [2024-12-06 10:19:11.446573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.444 [2024-12-06 10:19:11.446617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.444 [2024-12-06 10:19:11.446625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.444 [2024-12-06 10:19:11.446637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.444 [2024-12-06 10:19:11.446646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.444 [2024-12-06 10:19:11.446693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:05.444 [2024-12-06 10:19:11.446702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.444 [2024-12-06 10:19:11.446711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:05.444 [2024-12-06 10:19:11.446718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:05.444 [2024-12-06 10:19:11.446894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 345.227 ms, result 0 00:20:05.444 true 00:20:05.444 10:19:11 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76495 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76495 ']' 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76495 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76495 00:20:05.444 killing process with pid 76495 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76495' 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76495 00:20:05.444 10:19:11 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76495 00:20:12.007 10:19:17 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:12.267 65536+0 records in 00:20:12.267 65536+0 records out 00:20:12.267 268435456 bytes (268 MB, 256 MiB) copied, 0.801394 s, 335 MB/s 00:20:12.267 10:19:18 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:12.267 [2024-12-06 10:19:18.332771] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:20:12.267 [2024-12-06 10:19:18.333057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76673 ] 00:20:12.528 [2024-12-06 10:19:18.489628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.528 [2024-12-06 10:19:18.573105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:12.790 [2024-12-06 10:19:18.783639] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.790 [2024-12-06 10:19:18.783690] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.790 [2024-12-06 10:19:18.931271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.790 [2024-12-06 10:19:18.931307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:12.790 [2024-12-06 10:19:18.931318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.790 [2024-12-06 10:19:18.931324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.790 [2024-12-06 10:19:18.933402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.790 [2024-12-06 10:19:18.933431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.790 [2024-12-06 10:19:18.933439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.067 ms 00:20:12.790 [2024-12-06 10:19:18.933454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.790 [2024-12-06 10:19:18.933512] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:12.790 [2024-12-06 10:19:18.934058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:12.790 [2024-12-06 10:19:18.934074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.934081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.791 [2024-12-06 10:19:18.934088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:20:12.791 [2024-12-06 10:19:18.934094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.935050] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:12.791 [2024-12-06 10:19:18.944691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.944718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:12.791 [2024-12-06 10:19:18.944727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.641 ms 00:20:12.791 [2024-12-06 10:19:18.944733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.944798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.944807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:12.791 [2024-12-06 10:19:18.944813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:12.791 [2024-12-06 10:19:18.944819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.949283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:12.791 [2024-12-06 10:19:18.949307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.791 [2024-12-06 10:19:18.949314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.434 ms 00:20:12.791 [2024-12-06 10:19:18.949320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.949394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.949402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.791 [2024-12-06 10:19:18.949409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:12.791 [2024-12-06 10:19:18.949415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.949435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.949441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:12.791 [2024-12-06 10:19:18.949457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:12.791 [2024-12-06 10:19:18.949463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.949480] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:12.791 [2024-12-06 10:19:18.952150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.952172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.791 [2024-12-06 10:19:18.952179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.674 ms 00:20:12.791 [2024-12-06 10:19:18.952184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.952218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.952225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:12.791 [2024-12-06 10:19:18.952231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:12.791 [2024-12-06 10:19:18.952236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.952251] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:12.791 [2024-12-06 10:19:18.952267] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:12.791 [2024-12-06 10:19:18.952296] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:12.791 [2024-12-06 10:19:18.952310] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:12.791 [2024-12-06 10:19:18.952393] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:12.791 [2024-12-06 10:19:18.952403] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:12.791 [2024-12-06 10:19:18.952412] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:12.791 [2024-12-06 10:19:18.952424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:12.791 [2024-12-06 10:19:18.952431] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:12.791 [2024-12-06 10:19:18.952437] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:12.791 [2024-12-06 10:19:18.952443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:12.791 [2024-12-06 10:19:18.952458] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:12.791 [2024-12-06 10:19:18.952464] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:12.791 [2024-12-06 10:19:18.952470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.952475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:12.791 [2024-12-06 10:19:18.952481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:20:12.791 [2024-12-06 10:19:18.952486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.952554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.791 [2024-12-06 10:19:18.952565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:12.791 [2024-12-06 10:19:18.952571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:12.791 [2024-12-06 10:19:18.952577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.791 [2024-12-06 10:19:18.952654] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:12.791 [2024-12-06 10:19:18.952665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:12.791 [2024-12-06 10:19:18.952671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.791 [2024-12-06 10:19:18.952677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.791 [2024-12-06 10:19:18.952683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:12.791 [2024-12-06 10:19:18.952688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:12.791 [2024-12-06 10:19:18.952693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:12.791 [2024-12-06 10:19:18.952699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:12.791 [2024-12-06 10:19:18.952704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:12.791 [2024-12-06 10:19:18.952709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.791 [2024-12-06 10:19:18.952714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:12.791 [2024-12-06 10:19:18.952724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:12.791 [2024-12-06 10:19:18.952729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.791 [2024-12-06 10:19:18.952734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:12.791 [2024-12-06 10:19:18.952739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:12.791 [2024-12-06 10:19:18.952745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.791 [2024-12-06 10:19:18.952751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:12.791 [2024-12-06 10:19:18.952757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:12.791 [2024-12-06 10:19:18.952762] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.791 [2024-12-06 10:19:18.952767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:12.791 [2024-12-06 10:19:18.952773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:12.791 [2024-12-06 10:19:18.952778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.791 [2024-12-06 10:19:18.952783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:12.791 [2024-12-06 10:19:18.952788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:12.791 [2024-12-06 10:19:18.952793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.792 [2024-12-06 10:19:18.952798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:12.792 [2024-12-06 10:19:18.952803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:12.792 [2024-12-06 10:19:18.952808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.792 [2024-12-06 10:19:18.952813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:12.792 [2024-12-06 10:19:18.952818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:12.792 [2024-12-06 10:19:18.952824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.792 [2024-12-06 10:19:18.952829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:12.792 [2024-12-06 10:19:18.952834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:12.792 [2024-12-06 10:19:18.952839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.792 [2024-12-06 10:19:18.952844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:12.792 [2024-12-06 10:19:18.952849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:12.792 [2024-12-06 10:19:18.952854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.792 [2024-12-06 10:19:18.952859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:12.792 [2024-12-06 10:19:18.952864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:12.792 [2024-12-06 10:19:18.952869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.792 [2024-12-06 10:19:18.952874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:12.792 [2024-12-06 10:19:18.952879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:12.792 [2024-12-06 10:19:18.952883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.792 [2024-12-06 10:19:18.952888] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:12.792 [2024-12-06 10:19:18.952893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:12.792 [2024-12-06 10:19:18.952900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.792 [2024-12-06 10:19:18.952906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.792 [2024-12-06 10:19:18.952913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:12.792 [2024-12-06 10:19:18.952919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:12.792 [2024-12-06 10:19:18.952924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:12.792 
[2024-12-06 10:19:18.952929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:12.792 [2024-12-06 10:19:18.952933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:12.792 [2024-12-06 10:19:18.952938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:12.792 [2024-12-06 10:19:18.952945] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:12.792 [2024-12-06 10:19:18.952952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.792 [2024-12-06 10:19:18.952962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:12.792 [2024-12-06 10:19:18.952968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:12.792 [2024-12-06 10:19:18.952973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:12.792 [2024-12-06 10:19:18.952978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:12.792 [2024-12-06 10:19:18.952984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:12.792 [2024-12-06 10:19:18.952989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:12.792 [2024-12-06 10:19:18.952995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:12.792 [2024-12-06 10:19:18.953000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:12.792 [2024-12-06 10:19:18.953006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:12.792 [2024-12-06 10:19:18.953011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:12.792 [2024-12-06 10:19:18.953017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:12.792 [2024-12-06 10:19:18.953022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:12.792 [2024-12-06 10:19:18.953028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:12.792 [2024-12-06 10:19:18.953034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:12.792 [2024-12-06 10:19:18.953039] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:12.792 [2024-12-06 10:19:18.953045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.792 [2024-12-06 10:19:18.953051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:12.792 [2024-12-06 10:19:18.953057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:12.792 [2024-12-06 10:19:18.953062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:12.792 [2024-12-06 10:19:18.953068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:12.792 [2024-12-06 10:19:18.953073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.792 [2024-12-06 10:19:18.953081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:12.792 [2024-12-06 10:19:18.953086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.471 ms 00:20:12.792 [2024-12-06 10:19:18.953091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:18.973973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:18.974001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.054 [2024-12-06 10:19:18.974009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.840 ms 00:20:13.054 [2024-12-06 10:19:18.974015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:18.974111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:18.974122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.054 [2024-12-06 10:19:18.974129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:13.054 [2024-12-06 10:19:18.974135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.010453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.010485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.054 [2024-12-06 10:19:19.010496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.303 ms 00:20:13.054 [2024-12-06 10:19:19.010502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.010562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.010570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.054 [2024-12-06 10:19:19.010577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.054 [2024-12-06 10:19:19.010583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.010871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.010890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.054 [2024-12-06 10:19:19.010897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:13.054 [2024-12-06 10:19:19.010905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.011010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.011021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.054 [2024-12-06 10:19:19.011027] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:13.054 [2024-12-06 10:19:19.011033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.021823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.021850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.054 [2024-12-06 10:19:19.021857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.775 ms 00:20:13.054 [2024-12-06 10:19:19.021863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.031638] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:13.054 [2024-12-06 10:19:19.031669] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:13.054 [2024-12-06 10:19:19.031677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.031684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:13.054 [2024-12-06 10:19:19.031691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.743 ms 00:20:13.054 [2024-12-06 10:19:19.031696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.063897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.063941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:13.054 [2024-12-06 10:19:19.063953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.638 ms 00:20:13.054 [2024-12-06 10:19:19.063961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.075718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.054 [2024-12-06 10:19:19.075751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:13.054 [2024-12-06 10:19:19.075761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.678 ms 00:20:13.054 [2024-12-06 10:19:19.075769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.054 [2024-12-06 10:19:19.087441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.087480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:13.055 [2024-12-06 10:19:19.087490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.609 ms 00:20:13.055 [2024-12-06 10:19:19.087496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.088125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.088149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.055 [2024-12-06 10:19:19.088158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:20:13.055 [2024-12-06 10:19:19.088165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.145364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.145412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:13.055 [2024-12-06 10:19:19.145425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.177 ms 00:20:13.055 [2024-12-06 10:19:19.145433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.156113] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:13.055 [2024-12-06 10:19:19.171578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.171618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.055 [2024-12-06 10:19:19.171630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.036 ms 00:20:13.055 [2024-12-06 10:19:19.171638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.171721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.171732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:13.055 [2024-12-06 10:19:19.171740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:13.055 [2024-12-06 10:19:19.171748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.171796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.171806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.055 [2024-12-06 10:19:19.171814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:13.055 [2024-12-06 10:19:19.171822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.171856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.171867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:13.055 [2024-12-06 10:19:19.171875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:13.055 [2024-12-06 10:19:19.171883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.171914] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:13.055 [2024-12-06 10:19:19.171923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.171931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:13.055 [2024-12-06 10:19:19.171939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:13.055 [2024-12-06 10:19:19.171946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.196374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.196431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:13.055 [2024-12-06 10:19:19.196444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.404 ms 00:20:13.055 [2024-12-06 10:19:19.196461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.055 [2024-12-06 10:19:19.196559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.055 [2024-12-06 10:19:19.196571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:13.055 [2024-12-06 10:19:19.196580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:13.055 [2024-12-06 10:19:19.196588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
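The startup dump above reports each FTL region twice: once in MiB (the dump_region lines) and once in 4 KiB blocks with hex offsets and sizes (the Region type:... blk_offs/blk_sz lines), and the two views agree. For instance, the l2p region (type 0x2) has blk_sz 0x5a00 = 23040 blocks = 90 MiB, matching "blocks: 90.00 MiB" above, and that is exactly the space the mapping table needs, since 23592960 L2P entries at an address size of 4 bytes is also 90 MiB. The data region (type 0x9, data_btm) works out the same way. A short bash check; the 4096-byte FTL block size is an assumption implied by these numbers rather than printed in the log:

    # hex block counts from the superblock dump -> MiB, assuming 4 KiB FTL blocks
    echo $(( 0x5a00    * 4096 / 1024 / 1024 ))  # 90     -> l2p region (type 0x2)
    echo $(( 0x1900000 * 4096 / 1024 / 1024 ))  # 102400 -> data_btm (type 0x9)
    # L2P table size: entries * address size, in MiB
    echo $(( 23592960 * 4 / 1024 / 1024 ))      # 90 -> exactly fills the l2p region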
00:20:13.055 [2024-12-06 10:19:19.197503] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.055 [2024-12-06 10:19:19.200770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.906 ms, result 0 00:20:13.055 [2024-12-06 10:19:19.201970] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:13.055 [2024-12-06 10:19:19.215173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.441  [2024-12-06T10:19:21.549Z] Copying: 16/256 [MB] (16 MBps) [2024-12-06T10:19:22.484Z] Copying: 51/256 [MB] (35 MBps) [2024-12-06T10:19:23.419Z] Copying: 86/256 [MB] (34 MBps) [2024-12-06T10:19:24.354Z] Copying: 106/256 [MB] (20 MBps) [2024-12-06T10:19:25.284Z] Copying: 132/256 [MB] (25 MBps) [2024-12-06T10:19:26.656Z] Copying: 157/256 [MB] (24 MBps) [2024-12-06T10:19:27.220Z] Copying: 183/256 [MB] (25 MBps) [2024-12-06T10:19:28.594Z] Copying: 206/256 [MB] (23 MBps) [2024-12-06T10:19:29.531Z] Copying: 231/256 [MB] (25 MBps) [2024-12-06T10:19:29.789Z] Copying: 248/256 [MB] (16 MBps) [2024-12-06T10:19:29.789Z] Copying: 256/256 [MB] (average 24 MBps) [2024-12-06 10:19:29.734456] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:23.622 [2024-12-06 10:19:29.743890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.622 [2024-12-06 10:19:29.743926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:23.622 [2024-12-06 10:19:29.743939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:23.622 [2024-12-06 10:19:29.743952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.622 [2024-12-06 10:19:29.743972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:23.622 [2024-12-06 10:19:29.746535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.622 [2024-12-06 10:19:29.746561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:23.622 [2024-12-06 10:19:29.746571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:20:23.622 [2024-12-06 10:19:29.746579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.622 [2024-12-06 10:19:29.749037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.622 [2024-12-06 10:19:29.749071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:23.622 [2024-12-06 10:19:29.749080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:20:23.622 [2024-12-06 10:19:29.749087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.622 [2024-12-06 10:19:29.757014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.622 [2024-12-06 10:19:29.757051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:23.622 [2024-12-06 10:19:29.757060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.910 ms 00:20:23.622 [2024-12-06 10:19:29.757068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.622 [2024-12-06 10:19:29.763990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.622 [2024-12-06 10:19:29.764031] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:23.622 [2024-12-06 10:19:29.764040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.880 ms 00:20:23.622 [2024-12-06 10:19:29.764047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.881 [2024-12-06 10:19:29.787437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.881 [2024-12-06 10:19:29.787475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:23.881 [2024-12-06 10:19:29.787485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.352 ms 00:20:23.881 [2024-12-06 10:19:29.787492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.881 [2024-12-06 10:19:29.801867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.881 [2024-12-06 10:19:29.801912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:23.881 [2024-12-06 10:19:29.801924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.343 ms 00:20:23.881 [2024-12-06 10:19:29.801931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.881 [2024-12-06 10:19:29.802061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.881 [2024-12-06 10:19:29.802071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:23.881 [2024-12-06 10:19:29.802080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:23.881 [2024-12-06 10:19:29.802093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.881 [2024-12-06 10:19:29.825500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.881 [2024-12-06 10:19:29.825531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:23.881 [2024-12-06 10:19:29.825540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.391 ms 00:20:23.881 [2024-12-06 10:19:29.825547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.881 [2024-12-06 10:19:29.848716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.881 [2024-12-06 10:19:29.848747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:23.881 [2024-12-06 10:19:29.848756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.137 ms 00:20:23.881 [2024-12-06 10:19:29.848762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.881 [2024-12-06 10:19:29.871104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.881 [2024-12-06 10:19:29.871134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:23.881 [2024-12-06 10:19:29.871144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.299 ms 00:20:23.881 [2024-12-06 10:19:29.871151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.882 [2024-12-06 10:19:29.893968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.882 [2024-12-06 10:19:29.894001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:23.882 [2024-12-06 10:19:29.894010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.762 ms 00:20:23.882 [2024-12-06 10:19:29.894017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.882 [2024-12-06 10:19:29.894047] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:20:23.882 [2024-12-06 10:19:29.894060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894605] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:23.882 [2024-12-06 10:19:29.894677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894794] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:23.883 [2024-12-06 10:19:29.894809] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:23.883 [2024-12-06 10:19:29.894817] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:20:23.883 [2024-12-06 10:19:29.894824] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:23.883 [2024-12-06 10:19:29.894831] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:23.883 [2024-12-06 10:19:29.894837] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:23.883 [2024-12-06 10:19:29.894844] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:23.883 [2024-12-06 10:19:29.894851] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:23.883 [2024-12-06 10:19:29.894859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:23.883 [2024-12-06 10:19:29.894866] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:23.883 [2024-12-06 10:19:29.894872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:23.883 [2024-12-06 10:19:29.894878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:23.883 [2024-12-06 10:19:29.894884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.883 [2024-12-06 10:19:29.894893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:23.883 [2024-12-06 10:19:29.894902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:20:23.883 [2024-12-06 10:19:29.894908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.883 [2024-12-06 10:19:29.907368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.883 [2024-12-06 10:19:29.907397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:23.883 [2024-12-06 10:19:29.907407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.443 ms 00:20:23.883 [2024-12-06 10:19:29.907413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.883 [2024-12-06 10:19:29.907779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.883 [2024-12-06 10:19:29.907801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:23.883 [2024-12-06 10:19:29.907814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:20:23.883 [2024-12-06 10:19:29.907821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.883 [2024-12-06 10:19:29.942574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.883 [2024-12-06 10:19:29.942605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.883 [2024-12-06 10:19:29.942615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.883 [2024-12-06 10:19:29.942623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.883 [2024-12-06 10:19:29.942694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.883 [2024-12-06 10:19:29.942703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.883 [2024-12-06 10:19:29.942710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.883 [2024-12-06 10:19:29.942718] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.883 [2024-12-06 10:19:29.942755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.883 [2024-12-06 10:19:29.942764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.883 [2024-12-06 10:19:29.942771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.883 [2024-12-06 10:19:29.942777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.883 [2024-12-06 10:19:29.942792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.883 [2024-12-06 10:19:29.942802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.883 [2024-12-06 10:19:29.942809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.883 [2024-12-06 10:19:29.942815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.883 [2024-12-06 10:19:30.020139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:23.883 [2024-12-06 10:19:30.020188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.883 [2024-12-06 10:19:30.020200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:23.883 [2024-12-06 10:19:30.020208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.141 [2024-12-06 10:19:30.084946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.141 [2024-12-06 10:19:30.085000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.141 [2024-12-06 10:19:30.085012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.141 [2024-12-06 10:19:30.085020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.142 [2024-12-06 10:19:30.085075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.142 [2024-12-06 10:19:30.085084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.142 [2024-12-06 10:19:30.085092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.142 [2024-12-06 10:19:30.085099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.142 [2024-12-06 10:19:30.085127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.142 [2024-12-06 10:19:30.085135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.142 [2024-12-06 10:19:30.085148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.142 [2024-12-06 10:19:30.085156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.142 [2024-12-06 10:19:30.085240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.142 [2024-12-06 10:19:30.085249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.142 [2024-12-06 10:19:30.085257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.142 [2024-12-06 10:19:30.085263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.142 [2024-12-06 10:19:30.085293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.142 [2024-12-06 10:19:30.085301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:24.142 [2024-12-06 10:19:30.085309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:24.142 [2024-12-06 10:19:30.085319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.142 [2024-12-06 10:19:30.085355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.142 [2024-12-06 10:19:30.085363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.142 [2024-12-06 10:19:30.085371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.142 [2024-12-06 10:19:30.085378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.142 [2024-12-06 10:19:30.085420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.142 [2024-12-06 10:19:30.085429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.142 [2024-12-06 10:19:30.085440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.142 [2024-12-06 10:19:30.085468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.142 [2024-12-06 10:19:30.085602] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.703 ms, result 0 00:20:25.076 00:20:25.077 00:20:25.077 10:19:30 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76803 00:20:25.077 10:19:30 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76803 00:20:25.077 10:19:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76803 ']' 00:20:25.077 10:19:30 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.077 10:19:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.077 10:19:30 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:25.077 10:19:30 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.077 10:19:30 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.077 10:19:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:25.077 [2024-12-06 10:19:31.012251] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
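The restart sequence that begins here follows the usual SPDK test-harness pattern: trim.sh launches a bare spdk_tgt with FTL init tracing, waitforlisten blocks until the target answers RPCs on /var/tmp/spdk.sock, and the bdev stack (base device, NV cache, ftl0) is then rebuilt by replaying the JSON written earlier through rpc.py load_config. The sketch below illustrates that sequence using the paths from this log; the polling loop merely stands in for what waitforlisten does internally, and feeding ftl.json on stdin is one plausible way to hand the config to load_config, not necessarily what trim.sh does verbatim:

    # start the target with FTL init tracing, as in the log above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # poll until the RPC server on /var/tmp/spdk.sock responds (waitforlisten's job)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    # replay the saved bdev configuration; load_config reads JSON from stdin
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config \
        < /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json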
00:20:25.077 [2024-12-06 10:19:31.012368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76803 ] 00:20:25.077 [2024-12-06 10:19:31.168249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.339 [2024-12-06 10:19:31.263353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.928 10:19:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.928 10:19:31 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:25.928 10:19:31 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:25.928 [2024-12-06 10:19:32.051162] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:25.928 [2024-12-06 10:19:32.051221] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:26.187 [2024-12-06 10:19:32.225067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.187 [2024-12-06 10:19:32.225109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:26.187 [2024-12-06 10:19:32.225124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:26.187 [2024-12-06 10:19:32.225132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.187 [2024-12-06 10:19:32.227735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.187 [2024-12-06 10:19:32.227767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:26.187 [2024-12-06 10:19:32.227778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:20:26.187 [2024-12-06 10:19:32.227786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.187 [2024-12-06 10:19:32.227855] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:26.187 [2024-12-06 10:19:32.228531] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:26.187 [2024-12-06 10:19:32.228559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.187 [2024-12-06 10:19:32.228567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:26.187 [2024-12-06 10:19:32.228576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:20:26.187 [2024-12-06 10:19:32.228586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.187 [2024-12-06 10:19:32.229647] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:26.187 [2024-12-06 10:19:32.242089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.187 [2024-12-06 10:19:32.242125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:26.187 [2024-12-06 10:19:32.242136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.445 ms 00:20:26.187 [2024-12-06 10:19:32.242145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.187 [2024-12-06 10:19:32.242222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.187 [2024-12-06 10:19:32.242234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:26.187 [2024-12-06 10:19:32.242243] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:26.187 [2024-12-06 10:19:32.242252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.187 [2024-12-06 10:19:32.246945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.187 [2024-12-06 10:19:32.246980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:26.187 [2024-12-06 10:19:32.246988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms 00:20:26.187 [2024-12-06 10:19:32.246996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.188 [2024-12-06 10:19:32.247085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.188 [2024-12-06 10:19:32.247096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:26.188 [2024-12-06 10:19:32.247104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:26.188 [2024-12-06 10:19:32.247115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.188 [2024-12-06 10:19:32.247137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.188 [2024-12-06 10:19:32.247147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:26.188 [2024-12-06 10:19:32.247155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:26.188 [2024-12-06 10:19:32.247163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.188 [2024-12-06 10:19:32.247184] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:26.188 [2024-12-06 10:19:32.250366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.188 [2024-12-06 10:19:32.250392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:26.188 [2024-12-06 10:19:32.250402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.185 ms 00:20:26.188 [2024-12-06 10:19:32.250409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.188 [2024-12-06 10:19:32.250456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.188 [2024-12-06 10:19:32.250465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:26.188 [2024-12-06 10:19:32.250476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:26.188 [2024-12-06 10:19:32.250483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.188 [2024-12-06 10:19:32.250503] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:26.188 [2024-12-06 10:19:32.250520] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:26.188 [2024-12-06 10:19:32.250562] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:26.188 [2024-12-06 10:19:32.250576] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:26.188 [2024-12-06 10:19:32.250680] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:26.188 [2024-12-06 10:19:32.250691] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:26.188 [2024-12-06 10:19:32.250703] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:26.188 [2024-12-06 10:19:32.250713] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:26.188 [2024-12-06 10:19:32.250723] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:26.188 [2024-12-06 10:19:32.250731] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:26.188 [2024-12-06 10:19:32.250739] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:26.188 [2024-12-06 10:19:32.250746] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:26.188 [2024-12-06 10:19:32.250756] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:26.188 [2024-12-06 10:19:32.250764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.188 [2024-12-06 10:19:32.250772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:26.188 [2024-12-06 10:19:32.250780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:20:26.188 [2024-12-06 10:19:32.250790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.188 [2024-12-06 10:19:32.250875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.188 [2024-12-06 10:19:32.250884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:26.188 [2024-12-06 10:19:32.250892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:26.188 [2024-12-06 10:19:32.250900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.188 [2024-12-06 10:19:32.250997] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:26.188 [2024-12-06 10:19:32.251008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:26.188 [2024-12-06 10:19:32.251015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:26.188 [2024-12-06 10:19:32.251043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:26.188 [2024-12-06 10:19:32.251065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.188 [2024-12-06 10:19:32.251080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:26.188 [2024-12-06 10:19:32.251088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:26.188 [2024-12-06 10:19:32.251095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:26.188 [2024-12-06 10:19:32.251103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:26.188 [2024-12-06 10:19:32.251110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:26.188 [2024-12-06 10:19:32.251118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.188 
[2024-12-06 10:19:32.251125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:26.188 [2024-12-06 10:19:32.251133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:26.188 [2024-12-06 10:19:32.251159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:26.188 [2024-12-06 10:19:32.251183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:26.188 [2024-12-06 10:19:32.251204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:26.188 [2024-12-06 10:19:32.251227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:26.188 [2024-12-06 10:19:32.251247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.188 [2024-12-06 10:19:32.251262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:26.188 [2024-12-06 10:19:32.251269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:26.188 [2024-12-06 10:19:32.251275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:26.188 [2024-12-06 10:19:32.251283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:26.188 [2024-12-06 10:19:32.251290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:26.188 [2024-12-06 10:19:32.251299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:26.188 [2024-12-06 10:19:32.251314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:26.188 [2024-12-06 10:19:32.251320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251328] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:26.188 [2024-12-06 10:19:32.251337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:26.188 [2024-12-06 10:19:32.251345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:26.188 [2024-12-06 10:19:32.251361] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:26.188 [2024-12-06 10:19:32.251368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:26.188 [2024-12-06 10:19:32.251377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:26.188 [2024-12-06 10:19:32.251384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:26.188 [2024-12-06 10:19:32.251392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:26.188 [2024-12-06 10:19:32.251398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:26.188 [2024-12-06 10:19:32.251408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:26.189 [2024-12-06 10:19:32.251417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.189 [2024-12-06 10:19:32.251429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:26.189 [2024-12-06 10:19:32.251436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:26.189 [2024-12-06 10:19:32.251454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:26.189 [2024-12-06 10:19:32.251462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:26.189 [2024-12-06 10:19:32.251470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:26.189 [2024-12-06 10:19:32.251477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:26.189 [2024-12-06 10:19:32.251485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:26.189 [2024-12-06 10:19:32.251493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:26.189 [2024-12-06 10:19:32.251501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:26.189 [2024-12-06 10:19:32.251508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:26.189 [2024-12-06 10:19:32.251516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:26.189 [2024-12-06 10:19:32.251523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:26.189 [2024-12-06 10:19:32.251532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:26.189 [2024-12-06 10:19:32.251539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:26.189 [2024-12-06 10:19:32.251547] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:26.189 [2024-12-06 
10:19:32.251555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:26.189 [2024-12-06 10:19:32.251565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:26.189 [2024-12-06 10:19:32.251572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:26.189 [2024-12-06 10:19:32.251581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:26.189 [2024-12-06 10:19:32.251588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:26.189 [2024-12-06 10:19:32.251597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.251608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:26.189 [2024-12-06 10:19:32.251616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:20:26.189 [2024-12-06 10:19:32.251625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.189 [2024-12-06 10:19:32.277045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.277076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:26.189 [2024-12-06 10:19:32.277090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.351 ms 00:20:26.189 [2024-12-06 10:19:32.277097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.189 [2024-12-06 10:19:32.277209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.277218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:26.189 [2024-12-06 10:19:32.277228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:26.189 [2024-12-06 10:19:32.277235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.189 [2024-12-06 10:19:32.307453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.307482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:26.189 [2024-12-06 10:19:32.307493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.189 ms 00:20:26.189 [2024-12-06 10:19:32.307500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.189 [2024-12-06 10:19:32.307552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.307561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.189 [2024-12-06 10:19:32.307571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:26.189 [2024-12-06 10:19:32.307578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.189 [2024-12-06 10:19:32.307879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.307899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.189 [2024-12-06 10:19:32.307912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:26.189 [2024-12-06 10:19:32.307919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:26.189 [2024-12-06 10:19:32.308050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.308059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.189 [2024-12-06 10:19:32.308068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:26.189 [2024-12-06 10:19:32.308076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.189 [2024-12-06 10:19:32.322114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.189 [2024-12-06 10:19:32.322145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.189 [2024-12-06 10:19:32.322156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.016 ms 00:20:26.189 [2024-12-06 10:19:32.322164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.448 [2024-12-06 10:19:32.351888] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:26.448 [2024-12-06 10:19:32.351926] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:26.448 [2024-12-06 10:19:32.351943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.448 [2024-12-06 10:19:32.351951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:26.448 [2024-12-06 10:19:32.351962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.673 ms 00:20:26.448 [2024-12-06 10:19:32.351974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.448 [2024-12-06 10:19:32.375968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.448 [2024-12-06 10:19:32.376003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:26.448 [2024-12-06 10:19:32.376022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.915 ms 00:20:26.448 [2024-12-06 10:19:32.376030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.448 [2024-12-06 10:19:32.387700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.448 [2024-12-06 10:19:32.387729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:26.448 [2024-12-06 10:19:32.387742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.601 ms 00:20:26.448 [2024-12-06 10:19:32.387749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.448 [2024-12-06 10:19:32.399068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.448 [2024-12-06 10:19:32.399098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:26.449 [2024-12-06 10:19:32.399109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.256 ms 00:20:26.449 [2024-12-06 10:19:32.399117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.400516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.400550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:26.449 [2024-12-06 10:19:32.400563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:20:26.449 [2024-12-06 10:19:32.400579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 
10:19:32.454757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.454800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:26.449 [2024-12-06 10:19:32.454814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.152 ms 00:20:26.449 [2024-12-06 10:19:32.454822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.465223] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:26.449 [2024-12-06 10:19:32.478791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.478830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:26.449 [2024-12-06 10:19:32.478841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.882 ms 00:20:26.449 [2024-12-06 10:19:32.478850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.478918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.478929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:26.449 [2024-12-06 10:19:32.478938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:26.449 [2024-12-06 10:19:32.478947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.478993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.479003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:26.449 [2024-12-06 10:19:32.479013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:26.449 [2024-12-06 10:19:32.479021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.479044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.479053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:26.449 [2024-12-06 10:19:32.479061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:26.449 [2024-12-06 10:19:32.479071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.479102] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:26.449 [2024-12-06 10:19:32.479116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.479123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:26.449 [2024-12-06 10:19:32.479132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:26.449 [2024-12-06 10:19:32.479141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.502383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.502414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:26.449 [2024-12-06 10:19:32.502427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.219 ms 00:20:26.449 [2024-12-06 10:19:32.502434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.502584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.449 [2024-12-06 10:19:32.502607] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:26.449 [2024-12-06 10:19:32.502623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:26.449 [2024-12-06 10:19:32.502634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.449 [2024-12-06 10:19:32.503359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:26.449 [2024-12-06 10:19:32.506409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.028 ms, result 0 00:20:26.449 [2024-12-06 10:19:32.508240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.449 Some configs were skipped because the RPC state that can call them passed over. 00:20:26.449 10:19:32 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:26.707 [2024-12-06 10:19:32.727676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.707 [2024-12-06 10:19:32.727726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:26.707 [2024-12-06 10:19:32.727736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:20:26.707 [2024-12-06 10:19:32.727746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.708 [2024-12-06 10:19:32.727780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.483 ms, result 0 00:20:26.708 true 00:20:26.708 10:19:32 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:26.966 [2024-12-06 10:19:32.928200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.966 [2024-12-06 10:19:32.928242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:26.966 [2024-12-06 10:19:32.928256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.647 ms 00:20:26.966 [2024-12-06 10:19:32.928265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.966 [2024-12-06 10:19:32.928300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.748 ms, result 0 00:20:26.966 true 00:20:26.966 10:19:32 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76803 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76803 ']' 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76803 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76803 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.966 killing process with pid 76803 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76803' 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76803 00:20:26.966 10:19:32 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76803 00:20:27.533 [2024-12-06 10:19:33.636780] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.636828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:27.533 [2024-12-06 10:19:33.636840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:27.533 [2024-12-06 10:19:33.636852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.636874] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:27.533 [2024-12-06 10:19:33.639460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.639485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:27.533 [2024-12-06 10:19:33.639500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:20:27.533 [2024-12-06 10:19:33.639508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.639810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.639824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:27.533 [2024-12-06 10:19:33.639834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:27.533 [2024-12-06 10:19:33.639841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.644418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.644457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:27.533 [2024-12-06 10:19:33.644468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.554 ms 00:20:27.533 [2024-12-06 10:19:33.644475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.651460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.651499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:27.533 [2024-12-06 10:19:33.651512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.903 ms 00:20:27.533 [2024-12-06 10:19:33.651519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.661791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.661823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:27.533 [2024-12-06 10:19:33.661836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.222 ms 00:20:27.533 [2024-12-06 10:19:33.661844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.669006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.669037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:27.533 [2024-12-06 10:19:33.669049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.126 ms 00:20:27.533 [2024-12-06 10:19:33.669056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.669522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.669557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:27.533 [2024-12-06 10:19:33.669570] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:20:27.533 [2024-12-06 10:19:33.669578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.680125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.680155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:27.533 [2024-12-06 10:19:33.680168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.526 ms 00:20:27.533 [2024-12-06 10:19:33.680176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.533 [2024-12-06 10:19:33.690566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.533 [2024-12-06 10:19:33.690602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:27.533 [2024-12-06 10:19:33.690620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.066 ms 00:20:27.533 [2024-12-06 10:19:33.690627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.793 [2024-12-06 10:19:33.700426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.793 [2024-12-06 10:19:33.700466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:27.793 [2024-12-06 10:19:33.700478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.714 ms 00:20:27.793 [2024-12-06 10:19:33.700486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.793 [2024-12-06 10:19:33.709943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.793 [2024-12-06 10:19:33.709970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:27.793 [2024-12-06 10:19:33.709981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.394 ms 00:20:27.793 [2024-12-06 10:19:33.709987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.793 [2024-12-06 10:19:33.710020] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:27.793 [2024-12-06 10:19:33.710033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710125] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:27.793 [2024-12-06 10:19:33.710329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 
[2024-12-06 10:19:33.710356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:27.794 [2024-12-06 10:19:33.710574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:27.794 [2024-12-06 10:19:33.710922] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:27.794 [2024-12-06 10:19:33.710934] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:20:27.794 [2024-12-06 10:19:33.710942] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:27.794 [2024-12-06 10:19:33.710954] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:27.794 [2024-12-06 10:19:33.710961] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:27.794 [2024-12-06 10:19:33.710970] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:27.794 [2024-12-06 10:19:33.710977] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:27.794 [2024-12-06 10:19:33.710986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:27.794 [2024-12-06 10:19:33.710994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:27.794 [2024-12-06 10:19:33.711003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:27.794 [2024-12-06 10:19:33.711013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:27.794 [2024-12-06 10:19:33.711021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:27.794 [2024-12-06 10:19:33.711029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:27.794 [2024-12-06 10:19:33.711038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.002 ms 00:20:27.794 [2024-12-06 10:19:33.711050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.794 [2024-12-06 10:19:33.723261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.794 [2024-12-06 10:19:33.723290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:27.794 [2024-12-06 10:19:33.723302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.180 ms 00:20:27.794 [2024-12-06 10:19:33.723310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.794 [2024-12-06 10:19:33.723682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.794 [2024-12-06 10:19:33.723700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:27.794 [2024-12-06 10:19:33.723712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:20:27.794 [2024-12-06 10:19:33.723720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.794 [2024-12-06 10:19:33.767634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.794 [2024-12-06 10:19:33.767667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.794 [2024-12-06 10:19:33.767679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.794 [2024-12-06 10:19:33.767688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.794 [2024-12-06 10:19:33.767787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.794 [2024-12-06 10:19:33.767798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.795 [2024-12-06 10:19:33.767812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.767820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.767866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.767876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.795 [2024-12-06 10:19:33.767887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.767895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.767915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.767924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.795 [2024-12-06 10:19:33.767933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.767943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.844485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.844522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.795 [2024-12-06 10:19:33.844534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.844541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 
10:19:33.908397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.908435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.795 [2024-12-06 10:19:33.908466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.908475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.908545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.908554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.795 [2024-12-06 10:19:33.908566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.908573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.908602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.908610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.795 [2024-12-06 10:19:33.908619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.908626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.908713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.908722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.795 [2024-12-06 10:19:33.908732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.908738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.908770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.908778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.795 [2024-12-06 10:19:33.908787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.908794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.908832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.908840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.795 [2024-12-06 10:19:33.908852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.908859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.908902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.795 [2024-12-06 10:19:33.908911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.795 [2024-12-06 10:19:33.908921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.795 [2024-12-06 10:19:33.908928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.795 [2024-12-06 10:19:33.909054] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.253 ms, result 0 00:20:28.731 10:19:34 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:28.731 10:19:34 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:28.731 [2024-12-06 10:19:34.645151] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:20:28.731 [2024-12-06 10:19:34.645435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76858 ] 00:20:28.732 [2024-12-06 10:19:34.805905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.990 [2024-12-06 10:19:34.901116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.249 [2024-12-06 10:19:35.157021] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.249 [2024-12-06 10:19:35.157085] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.249 [2024-12-06 10:19:35.315095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.315139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.249 [2024-12-06 10:19:35.315156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:29.249 [2024-12-06 10:19:35.315164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.317819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.317853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.249 [2024-12-06 10:19:35.317863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.637 ms 00:20:29.249 [2024-12-06 10:19:35.317871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.317938] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.249 [2024-12-06 10:19:35.318646] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.249 [2024-12-06 10:19:35.318671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.318679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.249 [2024-12-06 10:19:35.318687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:20:29.249 [2024-12-06 10:19:35.318695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.319905] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.249 [2024-12-06 10:19:35.332511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.332544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.249 [2024-12-06 10:19:35.332555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.609 ms 00:20:29.249 [2024-12-06 10:19:35.332563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.332646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.332658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.249 [2024-12-06 10:19:35.332666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.021 ms 00:20:29.249 [2024-12-06 10:19:35.332674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.337391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.337421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.249 [2024-12-06 10:19:35.337430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:20:29.249 [2024-12-06 10:19:35.337438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.337528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.337538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.249 [2024-12-06 10:19:35.337546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:29.249 [2024-12-06 10:19:35.337554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.337580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.337588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.249 [2024-12-06 10:19:35.337596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:29.249 [2024-12-06 10:19:35.337603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.337622] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:29.249 [2024-12-06 10:19:35.340883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.340912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.249 [2024-12-06 10:19:35.340920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.266 ms 00:20:29.249 [2024-12-06 10:19:35.340927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.340962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.340971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.249 [2024-12-06 10:19:35.340978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:29.249 [2024-12-06 10:19:35.340986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.341005] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.249 [2024-12-06 10:19:35.341023] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:29.249 [2024-12-06 10:19:35.341056] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.249 [2024-12-06 10:19:35.341070] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:29.249 [2024-12-06 10:19:35.341171] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.249 [2024-12-06 10:19:35.341181] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.249 [2024-12-06 10:19:35.341191] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.249 [2024-12-06 10:19:35.341202] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341211] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:29.249 [2024-12-06 10:19:35.341226] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.249 [2024-12-06 10:19:35.341232] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.249 [2024-12-06 10:19:35.341239] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.249 [2024-12-06 10:19:35.341247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.341254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.249 [2024-12-06 10:19:35.341261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:20:29.249 [2024-12-06 10:19:35.341268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.341354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.249 [2024-12-06 10:19:35.341364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.249 [2024-12-06 10:19:35.341371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:29.249 [2024-12-06 10:19:35.341378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.249 [2024-12-06 10:19:35.341488] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.249 [2024-12-06 10:19:35.341498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.249 [2024-12-06 10:19:35.341506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.249 [2024-12-06 10:19:35.341528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.249 [2024-12-06 10:19:35.341547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.249 [2024-12-06 10:19:35.341561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.249 [2024-12-06 10:19:35.341572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:29.249 [2024-12-06 10:19:35.341579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.249 [2024-12-06 10:19:35.341588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.249 [2024-12-06 10:19:35.341594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:29.249 [2024-12-06 10:19:35.341601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341607] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.249 [2024-12-06 10:19:35.341615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.249 [2024-12-06 10:19:35.341634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.249 [2024-12-06 10:19:35.341653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.249 [2024-12-06 10:19:35.341673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.249 [2024-12-06 10:19:35.341691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.249 [2024-12-06 10:19:35.341710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.249 [2024-12-06 10:19:35.341723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.249 [2024-12-06 10:19:35.341729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:29.249 [2024-12-06 10:19:35.341736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.249 [2024-12-06 10:19:35.341742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.249 [2024-12-06 10:19:35.341749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:29.249 [2024-12-06 10:19:35.341755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.249 [2024-12-06 10:19:35.341768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:29.249 [2024-12-06 10:19:35.341774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341780] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.249 [2024-12-06 10:19:35.341787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.249 [2024-12-06 10:19:35.341797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.249 [2024-12-06 10:19:35.341804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.249 [2024-12-06 10:19:35.341812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.250 
[2024-12-06 10:19:35.341818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.250 [2024-12-06 10:19:35.341825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.250 [2024-12-06 10:19:35.341831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.250 [2024-12-06 10:19:35.341837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.250 [2024-12-06 10:19:35.341843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.250 [2024-12-06 10:19:35.341851] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.250 [2024-12-06 10:19:35.341860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.250 [2024-12-06 10:19:35.341868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:29.250 [2024-12-06 10:19:35.341875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:29.250 [2024-12-06 10:19:35.341881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:29.250 [2024-12-06 10:19:35.341888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:29.250 [2024-12-06 10:19:35.341895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:29.250 [2024-12-06 10:19:35.341902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:29.250 [2024-12-06 10:19:35.341909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:29.250 [2024-12-06 10:19:35.341915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:29.250 [2024-12-06 10:19:35.341922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:29.250 [2024-12-06 10:19:35.341928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:29.250 [2024-12-06 10:19:35.341935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:29.250 [2024-12-06 10:19:35.341942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:29.250 [2024-12-06 10:19:35.341949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:29.250 [2024-12-06 10:19:35.341956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:29.250 [2024-12-06 10:19:35.341963] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.250 [2024-12-06 10:19:35.341971] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.250 [2024-12-06 10:19:35.341978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.250 [2024-12-06 10:19:35.341985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.250 [2024-12-06 10:19:35.341992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.250 [2024-12-06 10:19:35.341999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.250 [2024-12-06 10:19:35.342007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.250 [2024-12-06 10:19:35.342016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.250 [2024-12-06 10:19:35.342023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:20:29.250 [2024-12-06 10:19:35.342030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.250 [2024-12-06 10:19:35.367498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.250 [2024-12-06 10:19:35.367530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.250 [2024-12-06 10:19:35.367540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.403 ms 00:20:29.250 [2024-12-06 10:19:35.367547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.250 [2024-12-06 10:19:35.367662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.250 [2024-12-06 10:19:35.367672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.250 [2024-12-06 10:19:35.367680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:29.250 [2024-12-06 10:19:35.367687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.250 [2024-12-06 10:19:35.411704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.250 [2024-12-06 10:19:35.411740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.250 [2024-12-06 10:19:35.411754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.996 ms 00:20:29.250 [2024-12-06 10:19:35.411761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.250 [2024-12-06 10:19:35.411845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.250 [2024-12-06 10:19:35.411857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.250 [2024-12-06 10:19:35.411865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:29.250 [2024-12-06 10:19:35.411872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.250 [2024-12-06 10:19:35.412209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.250 [2024-12-06 10:19:35.412224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.250 [2024-12-06 10:19:35.412238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:20:29.250 [2024-12-06 10:19:35.412245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.250 [2024-12-06 
10:19:35.412368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.250 [2024-12-06 10:19:35.412377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.250 [2024-12-06 10:19:35.412384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:29.250 [2024-12-06 10:19:35.412391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.425500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.425530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.509 [2024-12-06 10:19:35.425540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.089 ms 00:20:29.509 [2024-12-06 10:19:35.425548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.438141] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:29.509 [2024-12-06 10:19:35.438174] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.509 [2024-12-06 10:19:35.438185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.438193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.509 [2024-12-06 10:19:35.438201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.550 ms 00:20:29.509 [2024-12-06 10:19:35.438208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.462084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.462119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.509 [2024-12-06 10:19:35.462129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.810 ms 00:20:29.509 [2024-12-06 10:19:35.462136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.473851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.473880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.509 [2024-12-06 10:19:35.473890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.651 ms 00:20:29.509 [2024-12-06 10:19:35.473896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.485341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.485372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.509 [2024-12-06 10:19:35.485382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.387 ms 00:20:29.509 [2024-12-06 10:19:35.485388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.485997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.486015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.509 [2024-12-06 10:19:35.486024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:20:29.509 [2024-12-06 10:19:35.486031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.540318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.540361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.509 [2024-12-06 10:19:35.540374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.265 ms 00:20:29.509 [2024-12-06 10:19:35.540382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.550697] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:29.509 [2024-12-06 10:19:35.564138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.564172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.509 [2024-12-06 10:19:35.564182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.644 ms 00:20:29.509 [2024-12-06 10:19:35.564193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.564264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.564275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.509 [2024-12-06 10:19:35.564283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:29.509 [2024-12-06 10:19:35.564291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.564335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.564344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.509 [2024-12-06 10:19:35.564352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:29.509 [2024-12-06 10:19:35.564362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.564391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.564400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.509 [2024-12-06 10:19:35.564407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:29.509 [2024-12-06 10:19:35.564414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.564443] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.509 [2024-12-06 10:19:35.564469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.564476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.509 [2024-12-06 10:19:35.564484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:29.509 [2024-12-06 10:19:35.564491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.587955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.587988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.509 [2024-12-06 10:19:35.587999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.442 ms 00:20:29.509 [2024-12-06 10:19:35.588007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.588093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.509 [2024-12-06 10:19:35.588104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:29.509 [2024-12-06 10:19:35.588112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:29.509 [2024-12-06 10:19:35.588120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.509 [2024-12-06 10:19:35.589125] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.509 [2024-12-06 10:19:35.591950] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 273.763 ms, result 0 00:20:29.509 [2024-12-06 10:19:35.593230] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.509 [2024-12-06 10:19:35.606142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:30.883  [2024-12-06T10:19:37.617Z] Copying: 15/256 [MB] (15 MBps) [2024-12-06T10:19:38.993Z] Copying: 46/256 [MB] (30 MBps) [2024-12-06T10:19:39.927Z] Copying: 59/256 [MB] (13 MBps) [2024-12-06T10:19:40.861Z] Copying: 78/256 [MB] (18 MBps) [2024-12-06T10:19:41.804Z] Copying: 96/256 [MB] (17 MBps) [2024-12-06T10:19:42.738Z] Copying: 113/256 [MB] (17 MBps) [2024-12-06T10:19:43.673Z] Copying: 134/256 [MB] (21 MBps) [2024-12-06T10:19:45.050Z] Copying: 153/256 [MB] (18 MBps) [2024-12-06T10:19:45.627Z] Copying: 174/256 [MB] (21 MBps) [2024-12-06T10:19:46.998Z] Copying: 198/256 [MB] (23 MBps) [2024-12-06T10:19:47.932Z] Copying: 216/256 [MB] (18 MBps) [2024-12-06T10:19:48.499Z] Copying: 236/256 [MB] (20 MBps) [2024-12-06T10:19:48.499Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-06 10:19:48.466796] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:42.332 [2024-12-06 10:19:48.475993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.332 [2024-12-06 10:19:48.476036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:42.332 [2024-12-06 10:19:48.476053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:42.332 [2024-12-06 10:19:48.476061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.332 [2024-12-06 10:19:48.476081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:42.332 [2024-12-06 10:19:48.478640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.332 [2024-12-06 10:19:48.478664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:42.332 [2024-12-06 10:19:48.478675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.546 ms 00:20:42.332 [2024-12-06 10:19:48.478682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.332 [2024-12-06 10:19:48.478933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.332 [2024-12-06 10:19:48.478942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:42.332 [2024-12-06 10:19:48.478950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:20:42.332 [2024-12-06 10:19:48.478958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.332 [2024-12-06 10:19:48.482649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.332 [2024-12-06 10:19:48.482671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:42.332 [2024-12-06 10:19:48.482680] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:20:42.332 [2024-12-06 10:19:48.482688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.332 [2024-12-06 10:19:48.489606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.332 [2024-12-06 10:19:48.489634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:42.332 [2024-12-06 10:19:48.489643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms 00:20:42.332 [2024-12-06 10:19:48.489649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.512841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.591 [2024-12-06 10:19:48.512874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:42.591 [2024-12-06 10:19:48.512884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.143 ms 00:20:42.591 [2024-12-06 10:19:48.512892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.526800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.591 [2024-12-06 10:19:48.526834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:42.591 [2024-12-06 10:19:48.526847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.875 ms 00:20:42.591 [2024-12-06 10:19:48.526855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.526986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.591 [2024-12-06 10:19:48.526997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:42.591 [2024-12-06 10:19:48.527012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:42.591 [2024-12-06 10:19:48.527019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.550529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.591 [2024-12-06 10:19:48.550561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:42.591 [2024-12-06 10:19:48.550571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.495 ms 00:20:42.591 [2024-12-06 10:19:48.550578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.573827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.591 [2024-12-06 10:19:48.573857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:42.591 [2024-12-06 10:19:48.573867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.218 ms 00:20:42.591 [2024-12-06 10:19:48.573873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.596627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.591 [2024-12-06 10:19:48.596659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:42.591 [2024-12-06 10:19:48.596668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.722 ms 00:20:42.591 [2024-12-06 10:19:48.596674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.619484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.591 [2024-12-06 10:19:48.619515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:42.591 [2024-12-06 10:19:48.619524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.754 ms 00:20:42.591 [2024-12-06 10:19:48.619530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.591 [2024-12-06 10:19:48.619562] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:42.591 [2024-12-06 10:19:48.619575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 
10:19:48.619736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:42.591 [2024-12-06 10:19:48.619906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:42.591 [2024-12-06 10:19:48.619913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.619998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:42.592 [2024-12-06 10:19:48.620309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:42.592 [2024-12-06 10:19:48.620316] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:20:42.592 [2024-12-06 10:19:48.620324] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:42.592 [2024-12-06 10:19:48.620331] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:42.592 [2024-12-06 10:19:48.620343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:42.592 [2024-12-06 10:19:48.620350] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:42.592 [2024-12-06 10:19:48.620357] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:42.592 [2024-12-06 10:19:48.620365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:42.592 [2024-12-06 10:19:48.620374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:42.592 [2024-12-06 10:19:48.620380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:42.592 [2024-12-06 10:19:48.620386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:42.592 [2024-12-06 10:19:48.620393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 10:19:48.620399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:42.592 [2024-12-06 10:19:48.620407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:20:42.592 [2024-12-06 10:19:48.620414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 10:19:48.632728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 10:19:48.632758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:42.592 [2024-12-06 10:19:48.632767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.285 ms 00:20:42.592 [2024-12-06 10:19:48.632774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 10:19:48.633123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:42.592 [2024-12-06 10:19:48.633132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:42.592 [2024-12-06 10:19:48.633140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:20:42.592 [2024-12-06 10:19:48.633147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 10:19:48.667988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.592 [2024-12-06 10:19:48.668028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:42.592 [2024-12-06 10:19:48.668037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.592 [2024-12-06 10:19:48.668048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 
[2024-12-06 10:19:48.668123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.592 [2024-12-06 10:19:48.668133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:42.592 [2024-12-06 10:19:48.668140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.592 [2024-12-06 10:19:48.668147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 10:19:48.668184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.592 [2024-12-06 10:19:48.668192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:42.592 [2024-12-06 10:19:48.668200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.592 [2024-12-06 10:19:48.668206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 10:19:48.668225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.592 [2024-12-06 10:19:48.668232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:42.592 [2024-12-06 10:19:48.668239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.592 [2024-12-06 10:19:48.668245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.592 [2024-12-06 10:19:48.744018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.592 [2024-12-06 10:19:48.744058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:42.592 [2024-12-06 10:19:48.744069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.592 [2024-12-06 10:19:48.744076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.849 [2024-12-06 10:19:48.806319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:42.849 [2024-12-06 10:19:48.806330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.849 [2024-12-06 10:19:48.806338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.849 [2024-12-06 10:19:48.806403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:42.849 [2024-12-06 10:19:48.806410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.849 [2024-12-06 10:19:48.806418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.849 [2024-12-06 10:19:48.806474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:42.849 [2024-12-06 10:19:48.806482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.849 [2024-12-06 10:19:48.806489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.849 [2024-12-06 10:19:48.806586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:42.849 [2024-12-06 10:19:48.806593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.849 [2024-12-06 10:19:48.806600] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.849 [2024-12-06 10:19:48.806638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:42.849 [2024-12-06 10:19:48.806648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.849 [2024-12-06 10:19:48.806655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.849 [2024-12-06 10:19:48.806697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:42.849 [2024-12-06 10:19:48.806705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.849 [2024-12-06 10:19:48.806712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:42.849 [2024-12-06 10:19:48.806765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:42.849 [2024-12-06 10:19:48.806773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:42.849 [2024-12-06 10:19:48.806780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:42.849 [2024-12-06 10:19:48.806907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.905 ms, result 0 00:20:43.413 00:20:43.413 00:20:43.413 10:19:49 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:43.413 10:19:49 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:43.976 10:19:50 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:43.976 [2024-12-06 10:19:50.113547] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:20:43.976 [2024-12-06 10:19:50.113668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77023 ] 00:20:44.233 [2024-12-06 10:19:50.274055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.233 [2024-12-06 10:19:50.367826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.490 [2024-12-06 10:19:50.624706] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:44.490 [2024-12-06 10:19:50.624764] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:44.749 [2024-12-06 10:19:50.782552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.782592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:44.749 [2024-12-06 10:19:50.782605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:44.749 [2024-12-06 10:19:50.782613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.789210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.789310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.749 [2024-12-06 10:19:50.789343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.574 ms 00:20:44.749 [2024-12-06 10:19:50.789366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.789734] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:44.749 [2024-12-06 10:19:50.792079] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:44.749 [2024-12-06 10:19:50.792145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.792169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.749 [2024-12-06 10:19:50.792194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:20:44.749 [2024-12-06 10:19:50.792215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.794328] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:44.749 [2024-12-06 10:19:50.807974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.808005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:44.749 [2024-12-06 10:19:50.808030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.653 ms 00:20:44.749 [2024-12-06 10:19:50.808038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.808124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.808135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:44.749 [2024-12-06 10:19:50.808145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:44.749 [2024-12-06 10:19:50.808152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.813072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:44.749 [2024-12-06 10:19:50.813097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.749 [2024-12-06 10:19:50.813106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.880 ms 00:20:44.749 [2024-12-06 10:19:50.813113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.813200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.813209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.749 [2024-12-06 10:19:50.813217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:44.749 [2024-12-06 10:19:50.813224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.813249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.813258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:44.749 [2024-12-06 10:19:50.813265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:44.749 [2024-12-06 10:19:50.813272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.813290] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:44.749 [2024-12-06 10:19:50.816511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.816534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.749 [2024-12-06 10:19:50.816542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:20:44.749 [2024-12-06 10:19:50.816549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.816585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.816594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:44.749 [2024-12-06 10:19:50.816602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:44.749 [2024-12-06 10:19:50.816609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.816628] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:44.749 [2024-12-06 10:19:50.816645] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:44.749 [2024-12-06 10:19:50.816679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:44.749 [2024-12-06 10:19:50.816693] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:44.749 [2024-12-06 10:19:50.816795] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:44.749 [2024-12-06 10:19:50.816804] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:44.749 [2024-12-06 10:19:50.816814] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:44.749 [2024-12-06 10:19:50.816826] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:44.749 [2024-12-06 10:19:50.816835] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:44.749 [2024-12-06 10:19:50.816843] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:44.749 [2024-12-06 10:19:50.816850] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:44.749 [2024-12-06 10:19:50.816856] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:44.749 [2024-12-06 10:19:50.816863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:44.749 [2024-12-06 10:19:50.816870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.816877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:44.749 [2024-12-06 10:19:50.816885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:44.749 [2024-12-06 10:19:50.816891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.816978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.749 [2024-12-06 10:19:50.816989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:44.749 [2024-12-06 10:19:50.816996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:44.749 [2024-12-06 10:19:50.817003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.749 [2024-12-06 10:19:50.817102] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:44.749 [2024-12-06 10:19:50.817112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:44.749 [2024-12-06 10:19:50.817119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.749 [2024-12-06 10:19:50.817127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:44.749 [2024-12-06 10:19:50.817141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:44.749 [2024-12-06 10:19:50.817155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:44.749 [2024-12-06 10:19:50.817162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.749 [2024-12-06 10:19:50.817175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:44.749 [2024-12-06 10:19:50.817190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:44.749 [2024-12-06 10:19:50.817196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.749 [2024-12-06 10:19:50.817203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:44.749 [2024-12-06 10:19:50.817209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:44.749 [2024-12-06 10:19:50.817216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:44.749 [2024-12-06 10:19:50.817230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:44.749 [2024-12-06 10:19:50.817236] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:44.749 [2024-12-06 10:19:50.817249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.749 [2024-12-06 10:19:50.817262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:44.749 [2024-12-06 10:19:50.817268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.749 [2024-12-06 10:19:50.817281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:44.749 [2024-12-06 10:19:50.817288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.749 [2024-12-06 10:19:50.817300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:44.749 [2024-12-06 10:19:50.817307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.749 [2024-12-06 10:19:50.817319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:44.749 [2024-12-06 10:19:50.817325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:44.749 [2024-12-06 10:19:50.817331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.749 [2024-12-06 10:19:50.817337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:44.750 [2024-12-06 10:19:50.817343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:44.750 [2024-12-06 10:19:50.817349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.750 [2024-12-06 10:19:50.817356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:44.750 [2024-12-06 10:19:50.817362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:44.750 [2024-12-06 10:19:50.817369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.750 [2024-12-06 10:19:50.817375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:44.750 [2024-12-06 10:19:50.817381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:44.750 [2024-12-06 10:19:50.817387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.750 [2024-12-06 10:19:50.817394] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:44.750 [2024-12-06 10:19:50.817401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:44.750 [2024-12-06 10:19:50.817411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.750 [2024-12-06 10:19:50.817418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.750 [2024-12-06 10:19:50.817425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:44.750 [2024-12-06 10:19:50.817432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:44.750 [2024-12-06 10:19:50.817439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:44.750 
[2024-12-06 10:19:50.817463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:44.750 [2024-12-06 10:19:50.817471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:44.750 [2024-12-06 10:19:50.817478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:44.750 [2024-12-06 10:19:50.817486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:44.750 [2024-12-06 10:19:50.817495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.750 [2024-12-06 10:19:50.817503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:44.750 [2024-12-06 10:19:50.817510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:44.750 [2024-12-06 10:19:50.817517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:44.750 [2024-12-06 10:19:50.817525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:44.750 [2024-12-06 10:19:50.817532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:44.750 [2024-12-06 10:19:50.817539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:44.750 [2024-12-06 10:19:50.817546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:44.750 [2024-12-06 10:19:50.817553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:44.750 [2024-12-06 10:19:50.817560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:44.750 [2024-12-06 10:19:50.817567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:44.750 [2024-12-06 10:19:50.817574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:44.750 [2024-12-06 10:19:50.817581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:44.750 [2024-12-06 10:19:50.817588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:44.750 [2024-12-06 10:19:50.817595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:44.750 [2024-12-06 10:19:50.817602] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:44.750 [2024-12-06 10:19:50.817610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.750 [2024-12-06 10:19:50.817618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:44.750 [2024-12-06 10:19:50.817626] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:44.750 [2024-12-06 10:19:50.817633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:44.750 [2024-12-06 10:19:50.817640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:44.750 [2024-12-06 10:19:50.817647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.817657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:44.750 [2024-12-06 10:19:50.817664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:20:44.750 [2024-12-06 10:19:50.817670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.843475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.843501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.750 [2024-12-06 10:19:50.843511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.739 ms 00:20:44.750 [2024-12-06 10:19:50.843519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.843636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.843645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:44.750 [2024-12-06 10:19:50.843653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:44.750 [2024-12-06 10:19:50.843660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.884105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.884139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.750 [2024-12-06 10:19:50.884153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.425 ms 00:20:44.750 [2024-12-06 10:19:50.884161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.884246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.884258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.750 [2024-12-06 10:19:50.884267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:44.750 [2024-12-06 10:19:50.884274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.884606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.884626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.750 [2024-12-06 10:19:50.884642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:44.750 [2024-12-06 10:19:50.884649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.884782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.884791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.750 [2024-12-06 10:19:50.884798] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:44.750 [2024-12-06 10:19:50.884805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.898179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.898207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.750 [2024-12-06 10:19:50.898216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.353 ms 00:20:44.750 [2024-12-06 10:19:50.898224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.750 [2024-12-06 10:19:50.911079] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:44.750 [2024-12-06 10:19:50.911109] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:44.750 [2024-12-06 10:19:50.911120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.750 [2024-12-06 10:19:50.911127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:44.750 [2024-12-06 10:19:50.911135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:20:44.750 [2024-12-06 10:19:50.911142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:50.935480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:50.935510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:45.008 [2024-12-06 10:19:50.935520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.268 ms 00:20:45.008 [2024-12-06 10:19:50.935527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:50.947232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:50.947260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:45.008 [2024-12-06 10:19:50.947269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.635 ms 00:20:45.008 [2024-12-06 10:19:50.947276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:50.958624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:50.958651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:45.008 [2024-12-06 10:19:50.958660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.286 ms 00:20:45.008 [2024-12-06 10:19:50.958667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:50.959273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:50.959294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:45.008 [2024-12-06 10:19:50.959303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:20:45.008 [2024-12-06 10:19:50.959311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.015167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.015208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:45.008 [2024-12-06 10:19:51.015221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.833 ms 00:20:45.008 [2024-12-06 10:19:51.015229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.025386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:45.008 [2024-12-06 10:19:51.039253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.039286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:45.008 [2024-12-06 10:19:51.039298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.934 ms 00:20:45.008 [2024-12-06 10:19:51.039309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.039379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.039389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:45.008 [2024-12-06 10:19:51.039397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:45.008 [2024-12-06 10:19:51.039405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.039476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.039485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:45.008 [2024-12-06 10:19:51.039493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:45.008 [2024-12-06 10:19:51.039504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.039534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.039543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:45.008 [2024-12-06 10:19:51.039550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:45.008 [2024-12-06 10:19:51.039558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.039587] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:45.008 [2024-12-06 10:19:51.039596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.039603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:45.008 [2024-12-06 10:19:51.039611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:45.008 [2024-12-06 10:19:51.039618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.063344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.063374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:45.008 [2024-12-06 10:19:51.063385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.705 ms 00:20:45.008 [2024-12-06 10:19:51.063392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.008 [2024-12-06 10:19:51.063486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.008 [2024-12-06 10:19:51.063498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:45.008 [2024-12-06 10:19:51.063506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:45.008 [2024-12-06 10:19:51.063513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
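[editor's note] The layout dump earlier in this startup is internally consistent: assuming the FTL's 4 KiB block size (an assumption, though these numbers only work out with it), the l2p region's blk_sz of 0x5a00 blocks and the reported "L2P entries: 23592960" with a 4-byte address size both come to the same 90.00 MiB shown for "Region l2p":

    # Sanity-check of the layout dump above (4 KiB FTL block size assumed):
    echo $(( 0x5a00 * 4096 / 1048576 ))    # l2p region: 23040 blocks -> 90 (MiB)
    echo $(( 23592960 * 4 / 1048576 ))     # 23592960 L2P entries x 4 B  -> 90 (MiB)

Both expressions print 90, matching the dumped region size.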
00:20:45.008 [2024-12-06 10:19:51.064318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:45.008 [2024-12-06 10:19:51.067207] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 281.488 ms, result 0 00:20:45.008 [2024-12-06 10:19:51.068177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.008 [2024-12-06 10:19:51.081040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:45.272  [2024-12-06T10:19:51.439Z] Copying: 4096/4096 [kB] (average 16 MBps)[2024-12-06 10:19:51.326895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.272 [2024-12-06 10:19:51.335539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.335566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:45.272 [2024-12-06 10:19:51.335582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:45.272 [2024-12-06 10:19:51.335589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.335609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:45.272 [2024-12-06 10:19:51.338205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.338229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:45.272 [2024-12-06 10:19:51.338239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:20:45.272 [2024-12-06 10:19:51.338247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.340795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.340823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:45.272 [2024-12-06 10:19:51.340831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.526 ms 00:20:45.272 [2024-12-06 10:19:51.340839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.345183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.345205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:45.272 [2024-12-06 10:19:51.345214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.327 ms 00:20:45.272 [2024-12-06 10:19:51.345221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.352073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.352097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:45.272 [2024-12-06 10:19:51.352105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.826 ms 00:20:45.272 [2024-12-06 10:19:51.352112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.375246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.375275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:45.272 [2024-12-06 10:19:51.375285] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 23.085 ms 00:20:45.272 [2024-12-06 10:19:51.375292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.389837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.389869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:45.272 [2024-12-06 10:19:51.389880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.509 ms 00:20:45.272 [2024-12-06 10:19:51.389887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.390018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.390028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:45.272 [2024-12-06 10:19:51.390043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:45.272 [2024-12-06 10:19:51.390050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.272 [2024-12-06 10:19:51.413363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.272 [2024-12-06 10:19:51.413389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:45.272 [2024-12-06 10:19:51.413399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.298 ms 00:20:45.272 [2024-12-06 10:19:51.413405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.541 [2024-12-06 10:19:51.436558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.541 [2024-12-06 10:19:51.436583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:45.541 [2024-12-06 10:19:51.436593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.120 ms 00:20:45.541 [2024-12-06 10:19:51.436599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.541 [2024-12-06 10:19:51.459558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.541 [2024-12-06 10:19:51.459585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:45.541 [2024-12-06 10:19:51.459595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.926 ms 00:20:45.541 [2024-12-06 10:19:51.459602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.541 [2024-12-06 10:19:51.482414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.541 [2024-12-06 10:19:51.482441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:45.541 [2024-12-06 10:19:51.482457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.754 ms 00:20:45.541 [2024-12-06 10:19:51.482464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.541 [2024-12-06 10:19:51.482491] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:45.541 [2024-12-06 10:19:51.482504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:45.541 [2024-12-06 10:19:51.482536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:45.541 [2024-12-06 10:19:51.482959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.482966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.482973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.482980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.482987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.482994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483065] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:45.542 [2024-12-06 10:19:51.483241] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:45.542 [2024-12-06 10:19:51.483248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:20:45.542 [2024-12-06 10:19:51.483256] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:45.542 [2024-12-06 10:19:51.483263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:45.542 [2024-12-06 10:19:51.483270] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:45.542 [2024-12-06 10:19:51.483277] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:45.542 [2024-12-06 10:19:51.483284] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:45.542 [2024-12-06 10:19:51.483291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:45.542 [2024-12-06 10:19:51.483300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:45.542 [2024-12-06 10:19:51.483307] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:45.542 [2024-12-06 10:19:51.483313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:45.542 [2024-12-06 10:19:51.483320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.542 [2024-12-06 10:19:51.483327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:45.542 [2024-12-06 10:19:51.483335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:20:45.542 [2024-12-06 10:19:51.483342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.495679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.542 [2024-12-06 10:19:51.495705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:45.542 [2024-12-06 10:19:51.495715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.310 ms 00:20:45.542 [2024-12-06 10:19:51.495722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.496078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.542 [2024-12-06 10:19:51.496087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:45.542 [2024-12-06 10:19:51.496095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:45.542 [2024-12-06 10:19:51.496101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.531259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.531288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.542 [2024-12-06 10:19:51.531297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.531308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.531386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.531395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.542 [2024-12-06 10:19:51.531403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.531410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.531464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.531473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.542 [2024-12-06 10:19:51.531480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.531487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.531506] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.531514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.542 [2024-12-06 10:19:51.531521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.531527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.608369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.608404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.542 [2024-12-06 10:19:51.608414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.608426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.672122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.672157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.542 [2024-12-06 10:19:51.672168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.672175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.672239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.672249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.542 [2024-12-06 10:19:51.672257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.672265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.672293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.672304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.542 [2024-12-06 10:19:51.672312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.672319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.672401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.672410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.542 [2024-12-06 10:19:51.672418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.672425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.672467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.542 [2024-12-06 10:19:51.672476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:45.542 [2024-12-06 10:19:51.672487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.542 [2024-12-06 10:19:51.672494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.542 [2024-12-06 10:19:51.672529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.543 [2024-12-06 10:19:51.672537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.543 [2024-12-06 10:19:51.672545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.543 [2024-12-06 10:19:51.672552] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:45.543 [2024-12-06 10:19:51.672593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.543 [2024-12-06 10:19:51.672605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.543 [2024-12-06 10:19:51.672612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.543 [2024-12-06 10:19:51.672620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.543 [2024-12-06 10:19:51.672746] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 337.196 ms, result 0 00:20:46.474 00:20:46.474 00:20:46.474 10:19:52 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77048 00:20:46.474 10:19:52 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77048 00:20:46.474 10:19:52 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:46.474 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77048 ']' 00:20:46.474 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.474 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.474 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.474 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.474 10:19:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:46.474 [2024-12-06 10:19:52.453744] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
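[editor's note] The trim.sh trace here (steps 92-96) restarts the target with FTL init tracing and replays the saved configuration so the same ftl0 device comes back up from its persisted superblock. A sketch of that restart-and-restore flow, with paths as in the trace; the polling loop is a hypothetical stand-in for the suite's waitforlisten helper:

    # Sketch of the restart-and-restore step shown above:
    ./build/bin/spdk_tgt -L ftl_init &      # new target process with FTL init log component
    SVCPID=$!
    # waitforlisten blocks until the RPC socket /var/tmp/spdk.sock is up;
    # a plain poll against the RPC server works as a rough stand-in:
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    ./scripts/rpc.py load_config < test/ftl/config/ftl.json   # recreate ftl0 from the saved JSON

The FTL startup log that follows shows the device reloading its superblock and metadata rather than initializing from scratch.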
00:20:46.474 [2024-12-06 10:19:52.453853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77048 ] 00:20:46.474 [2024-12-06 10:19:52.614051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.732 [2024-12-06 10:19:52.710127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.299 10:19:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.299 10:19:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:47.299 10:19:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:47.558 [2024-12-06 10:19:53.499568] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.558 [2024-12-06 10:19:53.499623] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.558 [2024-12-06 10:19:53.670666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.558 [2024-12-06 10:19:53.670705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:47.558 [2024-12-06 10:19:53.670719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:47.558 [2024-12-06 10:19:53.670727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.558 [2024-12-06 10:19:53.673335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.558 [2024-12-06 10:19:53.673369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.558 [2024-12-06 10:19:53.673380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.588 ms 00:20:47.559 [2024-12-06 10:19:53.673388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.673470] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:47.559 [2024-12-06 10:19:53.674118] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:47.559 [2024-12-06 10:19:53.674138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.674146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.559 [2024-12-06 10:19:53.674156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:20:47.559 [2024-12-06 10:19:53.674165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.675283] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:47.559 [2024-12-06 10:19:53.687871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.687905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:47.559 [2024-12-06 10:19:53.687917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.591 ms 00:20:47.559 [2024-12-06 10:19:53.687927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.688009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.688038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:47.559 [2024-12-06 10:19:53.688046] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:47.559 [2024-12-06 10:19:53.688055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.693017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.693049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.559 [2024-12-06 10:19:53.693058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.915 ms 00:20:47.559 [2024-12-06 10:19:53.693067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.693159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.693172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.559 [2024-12-06 10:19:53.693180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:47.559 [2024-12-06 10:19:53.693191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.693213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.693223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:47.559 [2024-12-06 10:19:53.693230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:47.559 [2024-12-06 10:19:53.693239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.693260] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:47.559 [2024-12-06 10:19:53.696439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.696470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.559 [2024-12-06 10:19:53.696481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:20:47.559 [2024-12-06 10:19:53.696489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.696528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.696537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:47.559 [2024-12-06 10:19:53.696548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:47.559 [2024-12-06 10:19:53.696555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.696575] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:47.559 [2024-12-06 10:19:53.696594] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:47.559 [2024-12-06 10:19:53.696634] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:47.559 [2024-12-06 10:19:53.696648] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:47.559 [2024-12-06 10:19:53.696751] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:47.559 [2024-12-06 10:19:53.696763] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:47.559 [2024-12-06 10:19:53.696775] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:47.559 [2024-12-06 10:19:53.696785] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:47.559 [2024-12-06 10:19:53.696794] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:47.559 [2024-12-06 10:19:53.696802] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:47.559 [2024-12-06 10:19:53.696811] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:47.559 [2024-12-06 10:19:53.696818] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:47.559 [2024-12-06 10:19:53.696828] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:47.559 [2024-12-06 10:19:53.696836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.696844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:47.559 [2024-12-06 10:19:53.696852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:20:47.559 [2024-12-06 10:19:53.696862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.696949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.559 [2024-12-06 10:19:53.696958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:47.559 [2024-12-06 10:19:53.696966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:47.559 [2024-12-06 10:19:53.696974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.559 [2024-12-06 10:19:53.697073] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:47.559 [2024-12-06 10:19:53.697083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:47.559 [2024-12-06 10:19:53.697091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:47.559 [2024-12-06 10:19:53.697099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:47.559 [2024-12-06 10:19:53.697119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:47.559 [2024-12-06 10:19:53.697136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:47.559 [2024-12-06 10:19:53.697142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.559 [2024-12-06 10:19:53.697157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:47.559 [2024-12-06 10:19:53.697165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:47.559 [2024-12-06 10:19:53.697171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.559 [2024-12-06 10:19:53.697179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:47.559 [2024-12-06 10:19:53.697186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:47.559 [2024-12-06 10:19:53.697194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.559 
[2024-12-06 10:19:53.697201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:47.559 [2024-12-06 10:19:53.697209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:47.559 [2024-12-06 10:19:53.697221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:47.559 [2024-12-06 10:19:53.697236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.559 [2024-12-06 10:19:53.697251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:47.559 [2024-12-06 10:19:53.697261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.559 [2024-12-06 10:19:53.697276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:47.559 [2024-12-06 10:19:53.697282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.559 [2024-12-06 10:19:53.697297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:47.559 [2024-12-06 10:19:53.697305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.559 [2024-12-06 10:19:53.697319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:47.559 [2024-12-06 10:19:53.697326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.559 [2024-12-06 10:19:53.697340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:47.559 [2024-12-06 10:19:53.697348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:47.559 [2024-12-06 10:19:53.697354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.559 [2024-12-06 10:19:53.697362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:47.559 [2024-12-06 10:19:53.697369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:47.559 [2024-12-06 10:19:53.697378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:47.559 [2024-12-06 10:19:53.697392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:47.559 [2024-12-06 10:19:53.697399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.559 [2024-12-06 10:19:53.697407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:47.559 [2024-12-06 10:19:53.697414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:47.559 [2024-12-06 10:19:53.697423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:47.560 [2024-12-06 10:19:53.697429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.560 [2024-12-06 10:19:53.697438] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:47.560 [2024-12-06 10:19:53.697456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:47.560 [2024-12-06 10:19:53.697464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:47.560 [2024-12-06 10:19:53.697473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:47.560 [2024-12-06 10:19:53.697482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:47.560 [2024-12-06 10:19:53.697488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:47.560 [2024-12-06 10:19:53.697498] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:47.560 [2024-12-06 10:19:53.697507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:47.560 [2024-12-06 10:19:53.697520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:47.560 [2024-12-06 10:19:53.697528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:47.560 [2024-12-06 10:19:53.697536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:47.560 [2024-12-06 10:19:53.697543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:47.560 [2024-12-06 10:19:53.697552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:47.560 [2024-12-06 10:19:53.697559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:47.560 [2024-12-06 10:19:53.697567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:47.560 [2024-12-06 10:19:53.697574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:47.560 [2024-12-06 10:19:53.697583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:47.560 [2024-12-06 10:19:53.697590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:47.560 [2024-12-06 10:19:53.697598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:47.560 [2024-12-06 10:19:53.697605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:47.560 [2024-12-06 10:19:53.697614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:47.560 [2024-12-06 10:19:53.697621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:47.560 [2024-12-06 10:19:53.697629] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:47.560 [2024-12-06 
10:19:53.697637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:47.560 [2024-12-06 10:19:53.697649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:47.560 [2024-12-06 10:19:53.697656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:47.560 [2024-12-06 10:19:53.697665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:47.560 [2024-12-06 10:19:53.697672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:47.560 [2024-12-06 10:19:53.697680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.560 [2024-12-06 10:19:53.697688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:47.560 [2024-12-06 10:19:53.697697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:20:47.560 [2024-12-06 10:19:53.697706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.723512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.723540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.819 [2024-12-06 10:19:53.723554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.737 ms 00:20:47.819 [2024-12-06 10:19:53.723562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.723674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.723684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:47.819 [2024-12-06 10:19:53.723694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:47.819 [2024-12-06 10:19:53.723701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.753905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.753934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.819 [2024-12-06 10:19:53.753945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.180 ms 00:20:47.819 [2024-12-06 10:19:53.753952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.754005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.754014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.819 [2024-12-06 10:19:53.754024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:47.819 [2024-12-06 10:19:53.754031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.754358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.754378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.819 [2024-12-06 10:19:53.754390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:47.819 [2024-12-06 10:19:53.754397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.754533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.754542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.819 [2024-12-06 10:19:53.754551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:47.819 [2024-12-06 10:19:53.754559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.768876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.768902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.819 [2024-12-06 10:19:53.768913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.295 ms 00:20:47.819 [2024-12-06 10:19:53.768921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.795023] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:47.819 [2024-12-06 10:19:53.795069] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:47.819 [2024-12-06 10:19:53.795093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.795104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:47.819 [2024-12-06 10:19:53.795119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.065 ms 00:20:47.819 [2024-12-06 10:19:53.795136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.819964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.819995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:47.819 [2024-12-06 10:19:53.820007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.731 ms 00:20:47.819 [2024-12-06 10:19:53.820029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.831839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.831866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:47.819 [2024-12-06 10:19:53.831879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.740 ms 00:20:47.819 [2024-12-06 10:19:53.831886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.843585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.843612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:47.819 [2024-12-06 10:19:53.843624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.634 ms 00:20:47.819 [2024-12-06 10:19:53.843631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.844259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.844278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:47.819 [2024-12-06 10:19:53.844289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:20:47.819 [2024-12-06 10:19:53.844297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 
10:19:53.900370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.819 [2024-12-06 10:19:53.900408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:47.819 [2024-12-06 10:19:53.900421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.049 ms 00:20:47.819 [2024-12-06 10:19:53.900428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.819 [2024-12-06 10:19:53.910736] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:47.820 [2024-12-06 10:19:53.924858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.820 [2024-12-06 10:19:53.924898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:47.820 [2024-12-06 10:19:53.924909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.332 ms 00:20:47.820 [2024-12-06 10:19:53.924918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.820 [2024-12-06 10:19:53.924987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.820 [2024-12-06 10:19:53.924999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:47.820 [2024-12-06 10:19:53.925007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:47.820 [2024-12-06 10:19:53.925017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.820 [2024-12-06 10:19:53.925065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.820 [2024-12-06 10:19:53.925074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:47.820 [2024-12-06 10:19:53.925084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:47.820 [2024-12-06 10:19:53.925093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.820 [2024-12-06 10:19:53.925117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.820 [2024-12-06 10:19:53.925126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:47.820 [2024-12-06 10:19:53.925134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:47.820 [2024-12-06 10:19:53.925146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.820 [2024-12-06 10:19:53.925179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:47.820 [2024-12-06 10:19:53.925192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.820 [2024-12-06 10:19:53.925200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:47.820 [2024-12-06 10:19:53.925209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:47.820 [2024-12-06 10:19:53.925218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.820 [2024-12-06 10:19:53.948756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.820 [2024-12-06 10:19:53.948787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:47.820 [2024-12-06 10:19:53.948799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.514 ms 00:20:47.820 [2024-12-06 10:19:53.948807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.820 [2024-12-06 10:19:53.948891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.820 [2024-12-06 10:19:53.948901] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:47.820 [2024-12-06 10:19:53.948913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:20:47.820 [2024-12-06 10:19:53.948921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:47.820 [2024-12-06 10:19:53.949671] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:47.820 [2024-12-06 10:19:53.952672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.723 ms, result 0
00:20:47.820 [2024-12-06 10:19:53.954902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:47.820 Some configs were skipped because the RPC state that can call them passed over.
00:20:48.078 10:19:53 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:48.078 [2024-12-06 10:19:54.182350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:48.078 [2024-12-06 10:19:54.182395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:48.078 [2024-12-06 10:19:54.182407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.595 ms
00:20:48.078 [2024-12-06 10:19:54.182417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:48.078 [2024-12-06 10:19:54.182462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.697 ms, result 0
00:20:48.078 true
00:20:48.078 10:19:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:48.337 [2024-12-06 10:19:54.382590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:48.337 [2024-12-06 10:19:54.382626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:48.337 [2024-12-06 10:19:54.382639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.528 ms
00:20:48.337 [2024-12-06 10:19:54.382646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:48.337 [2024-12-06 10:19:54.382681] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.619 ms, result 0
00:20:48.337 true
00:20:48.337 10:19:54 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77048
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77048 ']'
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77048
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77048
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77048'
killing process with pid 77048
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77048
00:20:48.337 10:19:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77048
00:20:49.272 [2024-12-06 10:19:55.122786]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.122836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:49.272 [2024-12-06 10:19:55.122849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:49.272 [2024-12-06 10:19:55.122860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.122883] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:49.272 [2024-12-06 10:19:55.125499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.125526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:49.272 [2024-12-06 10:19:55.125540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:20:49.272 [2024-12-06 10:19:55.125548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.125845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.125860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:49.272 [2024-12-06 10:19:55.125870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:49.272 [2024-12-06 10:19:55.125877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.130338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.130367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:49.272 [2024-12-06 10:19:55.130378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.440 ms 00:20:49.272 [2024-12-06 10:19:55.130385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.137330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.137357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:49.272 [2024-12-06 10:19:55.137371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.909 ms 00:20:49.272 [2024-12-06 10:19:55.137379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.147247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.147282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:49.272 [2024-12-06 10:19:55.147294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.814 ms 00:20:49.272 [2024-12-06 10:19:55.147301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.155161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.155192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:49.272 [2024-12-06 10:19:55.155203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.821 ms 00:20:49.272 [2024-12-06 10:19:55.155210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.155350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.155361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:49.272 [2024-12-06 10:19:55.155371] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:49.272 [2024-12-06 10:19:55.155378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.272 [2024-12-06 10:19:55.164891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.272 [2024-12-06 10:19:55.164914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:49.272 [2024-12-06 10:19:55.164923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.492 ms 00:20:49.272 [2024-12-06 10:19:55.164928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.273 [2024-12-06 10:19:55.172456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.273 [2024-12-06 10:19:55.172480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:49.273 [2024-12-06 10:19:55.172493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.490 ms 00:20:49.273 [2024-12-06 10:19:55.172499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.273 [2024-12-06 10:19:55.179464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.273 [2024-12-06 10:19:55.179487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:49.273 [2024-12-06 10:19:55.179495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.922 ms 00:20:49.273 [2024-12-06 10:19:55.179500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.273 [2024-12-06 10:19:55.186409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.273 [2024-12-06 10:19:55.186432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:49.273 [2024-12-06 10:19:55.186440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms 00:20:49.273 [2024-12-06 10:19:55.186453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.273 [2024-12-06 10:19:55.186481] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:49.273 [2024-12-06 10:19:55.186492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186565] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 
[2024-12-06 10:19:55.186729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:49.273 [2024-12-06 10:19:55.186889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.186999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:49.273 [2024-12-06 10:19:55.187004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:49.274 [2024-12-06 10:19:55.187157] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:49.274 [2024-12-06 10:19:55.187166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:20:49.274 [2024-12-06 10:19:55.187172] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:49.274 [2024-12-06 10:19:55.187178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:49.274 [2024-12-06 10:19:55.187184] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:49.274 [2024-12-06 10:19:55.187191] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:49.274 [2024-12-06 10:19:55.187196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:49.274 [2024-12-06 10:19:55.187204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:49.274 [2024-12-06 10:19:55.187209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:49.274 [2024-12-06 10:19:55.187216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:49.274 [2024-12-06 10:19:55.187220] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:49.274 [2024-12-06 10:19:55.187228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:49.274 [2024-12-06 10:19:55.187233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:49.274 [2024-12-06 10:19:55.187241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:20:49.274 [2024-12-06 10:19:55.187248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 10:19:55.196699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.274 [2024-12-06 10:19:55.196720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:49.274 [2024-12-06 10:19:55.196730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.435 ms 00:20:49.274 [2024-12-06 10:19:55.196737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 10:19:55.197019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.274 [2024-12-06 10:19:55.197032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:49.274 [2024-12-06 10:19:55.197042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:20:49.274 [2024-12-06 10:19:55.197047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 10:19:55.231990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.274 [2024-12-06 10:19:55.232049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.274 [2024-12-06 10:19:55.232058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.274 [2024-12-06 10:19:55.232064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 10:19:55.232143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.274 [2024-12-06 10:19:55.232150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.274 [2024-12-06 10:19:55.232160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.274 [2024-12-06 10:19:55.232166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 10:19:55.232205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.274 [2024-12-06 10:19:55.232212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.274 [2024-12-06 10:19:55.232220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.274 [2024-12-06 10:19:55.232226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 10:19:55.232241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.274 [2024-12-06 10:19:55.232247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.274 [2024-12-06 10:19:55.232254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.274 [2024-12-06 10:19:55.232261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 10:19:55.291663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.274 [2024-12-06 10:19:55.291693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.274 [2024-12-06 10:19:55.291702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.274 [2024-12-06 10:19:55.291708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.274 [2024-12-06 
10:19:55.341150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:49.274 [2024-12-06 10:19:55.341181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:49.274 [2024-12-06 10:19:55.341193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:49.274 [2024-12-06 10:19:55.341199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:49.274 [2024-12-06 10:19:55.341256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:49.274 [2024-12-06 10:19:55.341263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:49.274 [2024-12-06 10:19:55.341273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:49.274 [2024-12-06 10:19:55.341279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:49.274 [2024-12-06 10:19:55.341302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:49.274 [2024-12-06 10:19:55.341308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:49.274 [2024-12-06 10:19:55.341316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:49.274 [2024-12-06 10:19:55.341321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:49.274 [2024-12-06 10:19:55.341391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:49.274 [2024-12-06 10:19:55.341398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:49.274 [2024-12-06 10:19:55.341405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:49.274 [2024-12-06 10:19:55.341411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:49.274 [2024-12-06 10:19:55.341438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:49.274 [2024-12-06 10:19:55.341461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:49.274 [2024-12-06 10:19:55.341469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:49.274 [2024-12-06 10:19:55.341475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:49.274 [2024-12-06 10:19:55.341507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:49.274 [2024-12-06 10:19:55.341514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:49.274 [2024-12-06 10:19:55.341523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:49.274 [2024-12-06 10:19:55.341528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:49.274 [2024-12-06 10:19:55.341562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:49.274 [2024-12-06 10:19:55.341570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:49.274 [2024-12-06 10:19:55.341577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:49.274 [2024-12-06 10:19:55.341583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:49.274 [2024-12-06 10:19:55.341687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 218.888 ms, result 0
00:20:49.839 10:19:55 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:49.839 [2024-12-06 10:19:55.924415] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization...
00:20:49.839 [2024-12-06 10:19:55.924541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77101 ]
00:20:50.097 [2024-12-06 10:19:56.081289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:50.097 [2024-12-06 10:19:56.156850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:50.355 [2024-12-06 10:19:56.368272] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:50.355 [2024-12-06 10:19:56.368322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:50.355 [2024-12-06 10:19:56.516680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:50.355 [2024-12-06 10:19:56.516720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:50.355 [2024-12-06 10:19:56.516731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:20:50.355 [2024-12-06 10:19:56.516740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:50.355 [2024-12-06 10:19:56.519373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:50.355 [2024-12-06 10:19:56.519404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:50.355 [2024-12-06 10:19:56.519414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.618 ms
00:20:50.355 [2024-12-06 10:19:56.519421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:50.355 [2024-12-06 10:19:56.519511] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:50.355 [2024-12-06 10:19:56.520404] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:50.355 [2024-12-06 10:19:56.520443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:50.355 [2024-12-06 10:19:56.520467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:50.355 [2024-12-06 10:19:56.520477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms
00:20:50.355 [2024-12-06 10:19:56.520485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:50.615 [2024-12-06 10:19:56.521607] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:50.615 [2024-12-06 10:19:56.534302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:50.615 [2024-12-06 10:19:56.534332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:50.615 [2024-12-06 10:19:56.534343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.697 ms
00:20:50.615 [2024-12-06 10:19:56.534351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:50.615 [2024-12-06 10:19:56.534436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:50.615 [2024-12-06 10:19:56.534462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:50.615 [2024-12-06 10:19:56.534471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:20:50.615 [2024-12-06
10:19:56.534478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.539307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.615 [2024-12-06 10:19:56.539331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.615 [2024-12-06 10:19:56.539341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.787 ms 00:20:50.615 [2024-12-06 10:19:56.539348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.539433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.615 [2024-12-06 10:19:56.539461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.615 [2024-12-06 10:19:56.539471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:50.615 [2024-12-06 10:19:56.539478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.539505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.615 [2024-12-06 10:19:56.539513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.615 [2024-12-06 10:19:56.539521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.615 [2024-12-06 10:19:56.539527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.539547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:50.615 [2024-12-06 10:19:56.542879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.615 [2024-12-06 10:19:56.542903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.615 [2024-12-06 10:19:56.542912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:20:50.615 [2024-12-06 10:19:56.542919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.542954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.615 [2024-12-06 10:19:56.542962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.615 [2024-12-06 10:19:56.542970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:50.615 [2024-12-06 10:19:56.542977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.542995] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:50.615 [2024-12-06 10:19:56.543012] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:50.615 [2024-12-06 10:19:56.543046] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:50.615 [2024-12-06 10:19:56.543060] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:50.615 [2024-12-06 10:19:56.543162] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:50.615 [2024-12-06 10:19:56.543171] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.615 [2024-12-06 10:19:56.543181] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
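
A note on the layout dump that follows: its numbers are internally consistent and easy to check. ftl_layout.c reports 23592960 L2P entries at an address size of 4 bytes, which works out to exactly the 90.00 MiB listed for the l2p region, and the spdk_dd transfer above was started with --count=65536 blocks, which at a 4 KiB FTL block size adds up to the 256 MB reported by the later "Copying: 256/256 [MB]" progress lines. A small shell sketch of both checks (variable names here are illustrative and values are copied from the log; the 4 KiB block size is inferred from the transfer size, not stated explicitly in this log):

    # L2P table size: entries x address size, expressed in MiB
    l2p_entries=23592960    # "L2P entries" from ftl_layout.c below
    l2p_addr_size=4         # "L2P address size" in bytes
    echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # prints 90 -> "Region l2p ... blocks: 90.00 MiB"

    # spdk_dd transfer size: --count blocks x assumed 4 KiB block size
    dd_count=65536          # from the spdk_dd invocation above
    block_size=4096         # assumption: 4 KiB FTL block
    echo $(( dd_count * block_size / 1024 / 1024 ))         # prints 256 -> "Copying: 256/256 [MB]"
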
00:20:50.615 [2024-12-06 10:19:56.543193] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543201] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543209] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:50.615 [2024-12-06 10:19:56.543216] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.615 [2024-12-06 10:19:56.543223] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:50.615 [2024-12-06 10:19:56.543230] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:50.615 [2024-12-06 10:19:56.543237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.615 [2024-12-06 10:19:56.543244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.615 [2024-12-06 10:19:56.543251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:20:50.615 [2024-12-06 10:19:56.543257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.543345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.615 [2024-12-06 10:19:56.543361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.615 [2024-12-06 10:19:56.543368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:50.615 [2024-12-06 10:19:56.543376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.615 [2024-12-06 10:19:56.543498] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.615 [2024-12-06 10:19:56.543509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.615 [2024-12-06 10:19:56.543517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.615 [2024-12-06 10:19:56.543539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.615 [2024-12-06 10:19:56.543562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.615 [2024-12-06 10:19:56.543576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.615 [2024-12-06 10:19:56.543588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:50.615 [2024-12-06 10:19:56.543594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.615 [2024-12-06 10:19:56.543601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.615 [2024-12-06 10:19:56.543608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:50.615 [2024-12-06 10:19:56.543614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:50.615 [2024-12-06 10:19:56.543627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.615 [2024-12-06 10:19:56.543648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.615 [2024-12-06 10:19:56.543668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.615 [2024-12-06 10:19:56.543687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.615 [2024-12-06 10:19:56.543700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.615 [2024-12-06 10:19:56.543706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:50.615 [2024-12-06 10:19:56.543712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.616 [2024-12-06 10:19:56.543719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.616 [2024-12-06 10:19:56.543726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:50.616 [2024-12-06 10:19:56.543733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.616 [2024-12-06 10:19:56.543739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.616 [2024-12-06 10:19:56.543745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:50.616 [2024-12-06 10:19:56.543751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.616 [2024-12-06 10:19:56.543758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:50.616 [2024-12-06 10:19:56.543765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:50.616 [2024-12-06 10:19:56.543771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.616 [2024-12-06 10:19:56.543778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:50.616 [2024-12-06 10:19:56.543785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:50.616 [2024-12-06 10:19:56.543792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.616 [2024-12-06 10:19:56.543798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.616 [2024-12-06 10:19:56.543806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.616 [2024-12-06 10:19:56.543815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.616 [2024-12-06 10:19:56.543822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.616 [2024-12-06 10:19:56.543829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.616 [2024-12-06 10:19:56.543836] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.616 [2024-12-06 10:19:56.543842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.616 [2024-12-06 10:19:56.543849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.616 [2024-12-06 10:19:56.543855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.616 [2024-12-06 10:19:56.543862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.616 [2024-12-06 10:19:56.543870] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.616 [2024-12-06 10:19:56.543878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.616 [2024-12-06 10:19:56.543886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:50.616 [2024-12-06 10:19:56.543893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:50.616 [2024-12-06 10:19:56.543900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:50.616 [2024-12-06 10:19:56.543907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:50.616 [2024-12-06 10:19:56.543914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:50.616 [2024-12-06 10:19:56.543920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:50.616 [2024-12-06 10:19:56.543927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:50.616 [2024-12-06 10:19:56.543934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:50.616 [2024-12-06 10:19:56.543940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:50.616 [2024-12-06 10:19:56.543948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:50.616 [2024-12-06 10:19:56.543955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:50.616 [2024-12-06 10:19:56.543961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:50.616 [2024-12-06 10:19:56.543968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:50.616 [2024-12-06 10:19:56.543975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:50.616 [2024-12-06 10:19:56.543982] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.616 [2024-12-06 10:19:56.543989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.616 [2024-12-06 10:19:56.543997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.616 [2024-12-06 10:19:56.544005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.616 [2024-12-06 10:19:56.544022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.616 [2024-12-06 10:19:56.544030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.616 [2024-12-06 10:19:56.544037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.544047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.616 [2024-12-06 10:19:56.544055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:20:50.616 [2024-12-06 10:19:56.544062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.569658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.569684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.616 [2024-12-06 10:19:56.569694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.542 ms 00:20:50.616 [2024-12-06 10:19:56.569701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.569820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.569830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.616 [2024-12-06 10:19:56.569838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:50.616 [2024-12-06 10:19:56.569845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.619151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.619189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.616 [2024-12-06 10:19:56.619203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.286 ms 00:20:50.616 [2024-12-06 10:19:56.619211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.619296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.619308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.616 [2024-12-06 10:19:56.619316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:50.616 [2024-12-06 10:19:56.619323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.619674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.619689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.616 [2024-12-06 10:19:56.619704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:50.616 [2024-12-06 10:19:56.619712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.619837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.619851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.616 [2024-12-06 10:19:56.619859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:50.616 [2024-12-06 10:19:56.619866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.633155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.633187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.616 [2024-12-06 10:19:56.633196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.269 ms 00:20:50.616 [2024-12-06 10:19:56.633204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.645960] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:50.616 [2024-12-06 10:19:56.645993] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:50.616 [2024-12-06 10:19:56.646005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.646012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:50.616 [2024-12-06 10:19:56.646021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms 00:20:50.616 [2024-12-06 10:19:56.646027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.670402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.670435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:50.616 [2024-12-06 10:19:56.670457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.305 ms 00:20:50.616 [2024-12-06 10:19:56.670466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.682215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.682339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:50.616 [2024-12-06 10:19:56.682355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.684 ms 00:20:50.616 [2024-12-06 10:19:56.682361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.694065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.694095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:50.616 [2024-12-06 10:19:56.694106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.645 ms 00:20:50.616 [2024-12-06 10:19:56.694112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.694725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 10:19:56.694743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:50.616 [2024-12-06 10:19:56.694752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:50.616 [2024-12-06 10:19:56.694760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.616 [2024-12-06 10:19:56.751297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.616 [2024-12-06 
10:19:56.751340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.616 [2024-12-06 10:19:56.751353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.514 ms 00:20:50.617 [2024-12-06 10:19:56.751361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.617 [2024-12-06 10:19:56.761861] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:50.617 [2024-12-06 10:19:56.775505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.617 [2024-12-06 10:19:56.775540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.617 [2024-12-06 10:19:56.775552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.059 ms 00:20:50.617 [2024-12-06 10:19:56.775564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.617 [2024-12-06 10:19:56.775634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.617 [2024-12-06 10:19:56.775644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.617 [2024-12-06 10:19:56.775653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:50.617 [2024-12-06 10:19:56.775661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.617 [2024-12-06 10:19:56.775704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.617 [2024-12-06 10:19:56.775713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.617 [2024-12-06 10:19:56.775720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:50.617 [2024-12-06 10:19:56.775730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.617 [2024-12-06 10:19:56.775760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.617 [2024-12-06 10:19:56.775768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.617 [2024-12-06 10:19:56.775776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:50.617 [2024-12-06 10:19:56.775783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.617 [2024-12-06 10:19:56.775811] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.617 [2024-12-06 10:19:56.775820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.617 [2024-12-06 10:19:56.775828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.617 [2024-12-06 10:19:56.775835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:50.617 [2024-12-06 10:19:56.775842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.875 [2024-12-06 10:19:56.799507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.875 [2024-12-06 10:19:56.799635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.875 [2024-12-06 10:19:56.799652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.642 ms 00:20:50.875 [2024-12-06 10:19:56.799660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.875 [2024-12-06 10:19:56.799738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.875 [2024-12-06 10:19:56.799748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.875 [2024-12-06 
10:19:56.799757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:50.875 [2024-12-06 10:19:56.799764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.875 [2024-12-06 10:19:56.800567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.875 [2024-12-06 10:19:56.803679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 283.587 ms, result 0 00:20:50.875 [2024-12-06 10:19:56.804898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:50.875 [2024-12-06 10:19:56.817731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.808  [2024-12-06T10:19:58.907Z] Copying: 26/256 [MB] (26 MBps) [2024-12-06T10:20:00.281Z] Copying: 49/256 [MB] (23 MBps) [2024-12-06T10:20:01.215Z] Copying: 68/256 [MB] (18 MBps) [2024-12-06T10:20:02.150Z] Copying: 87/256 [MB] (19 MBps) [2024-12-06T10:20:03.085Z] Copying: 110/256 [MB] (22 MBps) [2024-12-06T10:20:04.019Z] Copying: 127/256 [MB] (17 MBps) [2024-12-06T10:20:04.952Z] Copying: 140/256 [MB] (12 MBps) [2024-12-06T10:20:05.887Z] Copying: 162/256 [MB] (22 MBps) [2024-12-06T10:20:07.259Z] Copying: 193/256 [MB] (30 MBps) [2024-12-06T10:20:08.193Z] Copying: 222/256 [MB] (29 MBps) [2024-12-06T10:20:08.759Z] Copying: 244/256 [MB] (21 MBps) [2024-12-06T10:20:09.329Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-06 10:20:09.090874] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:03.162 [2024-12-06 10:20:09.102984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.103021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:03.162 [2024-12-06 10:20:09.103039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:03.162 [2024-12-06 10:20:09.103048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.103070] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:03.162 [2024-12-06 10:20:09.106094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.106122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:03.162 [2024-12-06 10:20:09.106132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.011 ms 00:21:03.162 [2024-12-06 10:20:09.106140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.106404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.106414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:03.162 [2024-12-06 10:20:09.106422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:21:03.162 [2024-12-06 10:20:09.106429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.110354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.110374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:03.162 [2024-12-06 10:20:09.110384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.885 ms 00:21:03.162 [2024-12-06 
10:20:09.110392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.117365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.117525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:03.162 [2024-12-06 10:20:09.117543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.955 ms 00:21:03.162 [2024-12-06 10:20:09.117553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.140724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.140846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:03.162 [2024-12-06 10:20:09.140862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.107 ms 00:21:03.162 [2024-12-06 10:20:09.140869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.154932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.154962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:03.162 [2024-12-06 10:20:09.154978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.031 ms 00:21:03.162 [2024-12-06 10:20:09.154985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.155121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.155131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:03.162 [2024-12-06 10:20:09.155146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:03.162 [2024-12-06 10:20:09.155154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.178935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.179051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:03.162 [2024-12-06 10:20:09.179066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.766 ms 00:21:03.162 [2024-12-06 10:20:09.179073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.202134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.202244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:03.162 [2024-12-06 10:20:09.202258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:21:03.162 [2024-12-06 10:20:09.202265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.224797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.224903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:03.162 [2024-12-06 10:20:09.224918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.503 ms 00:21:03.162 [2024-12-06 10:20:09.224925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.247692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.162 [2024-12-06 10:20:09.247798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:03.162 [2024-12-06 10:20:09.247812] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.713 ms 00:21:03.162 [2024-12-06 10:20:09.247819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.162 [2024-12-06 10:20:09.247847] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:03.162 [2024-12-06 10:20:09.247860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.247996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:21:03.162 [2024-12-06 10:20:09.248041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:03.162 [2024-12-06 10:20:09.248154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248612] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:03.163 [2024-12-06 10:20:09.248642] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:03.163 [2024-12-06 10:20:09.248649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5840c6c1-3aa6-4cb7-a72c-fa8a3c10b88d 00:21:03.163 [2024-12-06 10:20:09.248657] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:03.163 [2024-12-06 10:20:09.248664] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:03.163 [2024-12-06 10:20:09.248671] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:03.163 [2024-12-06 10:20:09.248679] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:03.163 [2024-12-06 10:20:09.248686] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:03.163 [2024-12-06 10:20:09.248695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:03.163 [2024-12-06 10:20:09.248704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:03.163 [2024-12-06 10:20:09.248710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:03.163 [2024-12-06 10:20:09.248716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:03.163 [2024-12-06 10:20:09.248723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.163 [2024-12-06 10:20:09.248730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:03.163 [2024-12-06 10:20:09.248739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:21:03.163 [2024-12-06 10:20:09.248745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.163 [2024-12-06 10:20:09.260992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.163 [2024-12-06 10:20:09.261016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:03.163 [2024-12-06 10:20:09.261026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.209 ms 00:21:03.163 [2024-12-06 10:20:09.261034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.163 [2024-12-06 10:20:09.261387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.163 [2024-12-06 10:20:09.261400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:03.163 [2024-12-06 10:20:09.261409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:21:03.163 [2024-12-06 10:20:09.261416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.163 [2024-12-06 10:20:09.296078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.163 [2024-12-06 10:20:09.296110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.163 [2024-12-06 10:20:09.296120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.163 [2024-12-06 10:20:09.296132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.163 [2024-12-06 10:20:09.296215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:03.163 [2024-12-06 10:20:09.296223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.163 [2024-12-06 10:20:09.296231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.163 [2024-12-06 10:20:09.296238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.163 [2024-12-06 10:20:09.296280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.163 [2024-12-06 10:20:09.296289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.163 [2024-12-06 10:20:09.296296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.163 [2024-12-06 10:20:09.296303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.164 [2024-12-06 10:20:09.296321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.164 [2024-12-06 10:20:09.296329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.164 [2024-12-06 10:20:09.296336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.164 [2024-12-06 10:20:09.296343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 10:20:09.373357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.373392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.474 [2024-12-06 10:20:09.373402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.373410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 10:20:09.435900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.435936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.474 [2024-12-06 10:20:09.435947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.435954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 10:20:09.436020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.436029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.474 [2024-12-06 10:20:09.436037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.436044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 10:20:09.436071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.436081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.474 [2024-12-06 10:20:09.436089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.436096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 10:20:09.436179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.436188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.474 [2024-12-06 10:20:09.436203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.436210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 
10:20:09.436242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.436251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:03.474 [2024-12-06 10:20:09.436261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.436268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 10:20:09.436302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.436310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.474 [2024-12-06 10:20:09.436317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.436324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.474 [2024-12-06 10:20:09.436364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:03.474 [2024-12-06 10:20:09.436376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.474 [2024-12-06 10:20:09.436383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:03.474 [2024-12-06 10:20:09.436391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.475 [2024-12-06 10:20:09.436541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 333.558 ms, result 0 00:21:04.051 00:21:04.051 00:21:04.051 10:20:10 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:04.616 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:04.616 10:20:10 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:04.616 10:20:10 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:04.616 10:20:10 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:04.616 10:20:10 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:04.616 10:20:10 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:04.616 10:20:10 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:04.616 Process with pid 77048 is not found 00:21:04.616 10:20:10 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77048 00:21:04.616 10:20:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77048 ']' 00:21:04.616 10:20:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77048 00:21:04.616 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77048) - No such process 00:21:04.616 10:20:10 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77048 is not found' 00:21:04.616 ************************************ 00:21:04.616 END TEST ftl_trim 00:21:04.616 ************************************ 00:21:04.616 00:21:04.616 real 1m7.431s 00:21:04.616 user 1m24.596s 00:21:04.616 sys 0m14.136s 00:21:04.616 10:20:10 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.616 10:20:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:04.875 10:20:10 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:04.875 10:20:10 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:04.875 10:20:10 ftl -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:21:04.875 10:20:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:04.875 ************************************ 00:21:04.875 START TEST ftl_restore 00:21:04.875 ************************************ 00:21:04.875 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:04.875 * Looking for test storage... 00:21:04.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:04.875 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:04.875 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:04.875 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:04.875 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.875 10:20:10 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:04.876 10:20:10 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:04.876 10:20:10 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.876 10:20:10 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:04.876 10:20:10 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.876 10:20:10 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.876 10:20:10 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.876 10:20:10 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.876 --rc genhtml_branch_coverage=1 00:21:04.876 --rc genhtml_function_coverage=1 00:21:04.876 --rc genhtml_legend=1 00:21:04.876 --rc geninfo_all_blocks=1 00:21:04.876 --rc geninfo_unexecuted_blocks=1 00:21:04.876 00:21:04.876 ' 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.876 --rc genhtml_branch_coverage=1 00:21:04.876 --rc genhtml_function_coverage=1 00:21:04.876 --rc genhtml_legend=1 00:21:04.876 --rc geninfo_all_blocks=1 00:21:04.876 --rc geninfo_unexecuted_blocks=1 00:21:04.876 00:21:04.876 ' 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.876 --rc genhtml_branch_coverage=1 00:21:04.876 --rc genhtml_function_coverage=1 00:21:04.876 --rc genhtml_legend=1 00:21:04.876 --rc geninfo_all_blocks=1 00:21:04.876 --rc geninfo_unexecuted_blocks=1 00:21:04.876 00:21:04.876 ' 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.876 --rc genhtml_branch_coverage=1 00:21:04.876 --rc genhtml_function_coverage=1 00:21:04.876 --rc genhtml_legend=1 00:21:04.876 --rc geninfo_all_blocks=1 00:21:04.876 --rc geninfo_unexecuted_blocks=1 00:21:04.876 00:21:04.876 ' 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
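The xtrace run above is scripts/common.sh deciding whether the installed lcov predates 2.x: the `lt 1.15 2` call splits both version strings on '.', '-' and ':' and compares them component by component, padding the shorter one. Below is a minimal standalone sketch of that comparison, assuming plain bash only; the function and variable names are illustrative, not the actual SPDK helpers.

  #!/usr/bin/env bash
  # Sketch of the component-wise dotted-version comparison traced above.
  # version_lt A B -> exit 0 if A < B, treating missing components as 0.
  version_lt() {
      local -a v1 v2
      local i len c1 c2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      # Same idea as the (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) loop bound.
      len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          c1=${v1[i]:-0}; c2=${v2[i]:-0}
          # Non-numeric components (e.g. "rc1") are simply treated as 0 in this sketch.
          [[ $c1 =~ ^[0-9]+$ ]] || c1=0
          [[ $c2 =~ ^[0-9]+$ ]] || c2=0
          (( c1 < c2 )) && return 0
          (( c1 > c2 )) && return 1
      done
      return 1  # equal is not "less than"
  }

  # The check made above before picking lcov coverage flags:
  version_lt 1.15 2 && echo '1.15 predates 2.x, use the legacy --rc lcov_* flags'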
00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.K4gXCbCDDr 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:04.876 
10:20:10 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77325 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77325 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77325 ']' 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.876 10:20:10 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.876 10:20:10 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:05.134 [2024-12-06 10:20:11.070140] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:21:05.134 [2024-12-06 10:20:11.070367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77325 ] 00:21:05.134 [2024-12-06 10:20:11.230742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.390 [2024-12-06 10:20:11.326058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.954 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.954 10:20:11 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:05.954 10:20:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:05.954 10:20:11 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:05.954 10:20:11 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:05.954 10:20:11 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:05.954 10:20:11 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:05.954 10:20:11 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:06.212 10:20:12 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:06.212 10:20:12 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:06.212 10:20:12 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:06.212 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:06.212 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:06.212 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:06.212 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:06.212 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:06.212 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:06.212 { 00:21:06.212 "name": "nvme0n1", 00:21:06.212 "aliases": [ 00:21:06.212 "0c799918-52d5-4c46-a02d-bfb894b15b0d" 00:21:06.212 ], 00:21:06.212 "product_name": "NVMe disk", 00:21:06.212 "block_size": 4096, 00:21:06.212 "num_blocks": 1310720, 00:21:06.212 "uuid": 
"0c799918-52d5-4c46-a02d-bfb894b15b0d", 00:21:06.212 "numa_id": -1, 00:21:06.212 "assigned_rate_limits": { 00:21:06.212 "rw_ios_per_sec": 0, 00:21:06.212 "rw_mbytes_per_sec": 0, 00:21:06.212 "r_mbytes_per_sec": 0, 00:21:06.212 "w_mbytes_per_sec": 0 00:21:06.212 }, 00:21:06.212 "claimed": true, 00:21:06.212 "claim_type": "read_many_write_one", 00:21:06.212 "zoned": false, 00:21:06.212 "supported_io_types": { 00:21:06.212 "read": true, 00:21:06.212 "write": true, 00:21:06.212 "unmap": true, 00:21:06.212 "flush": true, 00:21:06.212 "reset": true, 00:21:06.212 "nvme_admin": true, 00:21:06.212 "nvme_io": true, 00:21:06.212 "nvme_io_md": false, 00:21:06.212 "write_zeroes": true, 00:21:06.212 "zcopy": false, 00:21:06.212 "get_zone_info": false, 00:21:06.212 "zone_management": false, 00:21:06.212 "zone_append": false, 00:21:06.212 "compare": true, 00:21:06.212 "compare_and_write": false, 00:21:06.212 "abort": true, 00:21:06.212 "seek_hole": false, 00:21:06.212 "seek_data": false, 00:21:06.212 "copy": true, 00:21:06.212 "nvme_iov_md": false 00:21:06.212 }, 00:21:06.212 "driver_specific": { 00:21:06.212 "nvme": [ 00:21:06.212 { 00:21:06.212 "pci_address": "0000:00:11.0", 00:21:06.212 "trid": { 00:21:06.212 "trtype": "PCIe", 00:21:06.212 "traddr": "0000:00:11.0" 00:21:06.212 }, 00:21:06.212 "ctrlr_data": { 00:21:06.212 "cntlid": 0, 00:21:06.212 "vendor_id": "0x1b36", 00:21:06.212 "model_number": "QEMU NVMe Ctrl", 00:21:06.212 "serial_number": "12341", 00:21:06.212 "firmware_revision": "8.0.0", 00:21:06.212 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:06.212 "oacs": { 00:21:06.212 "security": 0, 00:21:06.212 "format": 1, 00:21:06.212 "firmware": 0, 00:21:06.212 "ns_manage": 1 00:21:06.212 }, 00:21:06.212 "multi_ctrlr": false, 00:21:06.212 "ana_reporting": false 00:21:06.212 }, 00:21:06.212 "vs": { 00:21:06.212 "nvme_version": "1.4" 00:21:06.212 }, 00:21:06.212 "ns_data": { 00:21:06.212 "id": 1, 00:21:06.212 "can_share": false 00:21:06.212 } 00:21:06.212 } 00:21:06.212 ], 00:21:06.212 "mp_policy": "active_passive" 00:21:06.212 } 00:21:06.212 } 00:21:06.212 ]' 00:21:06.212 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:06.470 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:06.470 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:06.470 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:06.470 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:06.470 10:20:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=78c9fde1-05a3-4585-82ad-c40d72b2f9bf 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:06.470 10:20:12 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78c9fde1-05a3-4585-82ad-c40d72b2f9bf 00:21:06.727 10:20:12 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:07.040 10:20:12 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=71f2b84d-ae6b-45de-8191-eea688c1bbb1 00:21:07.040 10:20:12 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 71f2b84d-ae6b-45de-8191-eea688c1bbb1 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:07.299 10:20:13 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.299 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.299 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:07.299 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:07.299 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:07.299 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.299 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:07.299 { 00:21:07.299 "name": "aebcf1c3-9727-4ea8-801c-6c24fbbf7634", 00:21:07.299 "aliases": [ 00:21:07.299 "lvs/nvme0n1p0" 00:21:07.299 ], 00:21:07.299 "product_name": "Logical Volume", 00:21:07.299 "block_size": 4096, 00:21:07.299 "num_blocks": 26476544, 00:21:07.299 "uuid": "aebcf1c3-9727-4ea8-801c-6c24fbbf7634", 00:21:07.299 "assigned_rate_limits": { 00:21:07.299 "rw_ios_per_sec": 0, 00:21:07.299 "rw_mbytes_per_sec": 0, 00:21:07.299 "r_mbytes_per_sec": 0, 00:21:07.299 "w_mbytes_per_sec": 0 00:21:07.299 }, 00:21:07.299 "claimed": false, 00:21:07.299 "zoned": false, 00:21:07.299 "supported_io_types": { 00:21:07.299 "read": true, 00:21:07.299 "write": true, 00:21:07.299 "unmap": true, 00:21:07.299 "flush": false, 00:21:07.299 "reset": true, 00:21:07.299 "nvme_admin": false, 00:21:07.299 "nvme_io": false, 00:21:07.299 "nvme_io_md": false, 00:21:07.299 "write_zeroes": true, 00:21:07.299 "zcopy": false, 00:21:07.299 "get_zone_info": false, 00:21:07.299 "zone_management": false, 00:21:07.299 "zone_append": false, 00:21:07.299 "compare": false, 00:21:07.299 "compare_and_write": false, 00:21:07.299 "abort": false, 00:21:07.299 "seek_hole": true, 00:21:07.299 "seek_data": true, 00:21:07.299 "copy": false, 00:21:07.299 "nvme_iov_md": false 00:21:07.299 }, 00:21:07.299 "driver_specific": { 00:21:07.299 "lvol": { 00:21:07.299 "lvol_store_uuid": "71f2b84d-ae6b-45de-8191-eea688c1bbb1", 00:21:07.299 "base_bdev": "nvme0n1", 00:21:07.299 "thin_provision": true, 00:21:07.299 "num_allocated_clusters": 0, 00:21:07.299 "snapshot": false, 00:21:07.299 "clone": false, 00:21:07.300 "esnap_clone": false 00:21:07.300 } 00:21:07.300 } 00:21:07.300 } 00:21:07.300 ]' 00:21:07.300 10:20:13 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:07.300 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:07.300 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:07.300 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:07.300 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:07.300 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:07.300 10:20:13 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:07.300 10:20:13 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:07.300 10:20:13 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:07.557 10:20:13 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:07.557 10:20:13 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:07.558 10:20:13 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.558 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.558 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:07.558 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:07.558 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:07.558 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:07.815 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:07.815 { 00:21:07.815 "name": "aebcf1c3-9727-4ea8-801c-6c24fbbf7634", 00:21:07.815 "aliases": [ 00:21:07.815 "lvs/nvme0n1p0" 00:21:07.815 ], 00:21:07.815 "product_name": "Logical Volume", 00:21:07.815 "block_size": 4096, 00:21:07.815 "num_blocks": 26476544, 00:21:07.815 "uuid": "aebcf1c3-9727-4ea8-801c-6c24fbbf7634", 00:21:07.815 "assigned_rate_limits": { 00:21:07.815 "rw_ios_per_sec": 0, 00:21:07.815 "rw_mbytes_per_sec": 0, 00:21:07.815 "r_mbytes_per_sec": 0, 00:21:07.815 "w_mbytes_per_sec": 0 00:21:07.815 }, 00:21:07.815 "claimed": false, 00:21:07.815 "zoned": false, 00:21:07.815 "supported_io_types": { 00:21:07.815 "read": true, 00:21:07.815 "write": true, 00:21:07.815 "unmap": true, 00:21:07.815 "flush": false, 00:21:07.815 "reset": true, 00:21:07.815 "nvme_admin": false, 00:21:07.815 "nvme_io": false, 00:21:07.815 "nvme_io_md": false, 00:21:07.815 "write_zeroes": true, 00:21:07.815 "zcopy": false, 00:21:07.815 "get_zone_info": false, 00:21:07.815 "zone_management": false, 00:21:07.815 "zone_append": false, 00:21:07.815 "compare": false, 00:21:07.815 "compare_and_write": false, 00:21:07.815 "abort": false, 00:21:07.815 "seek_hole": true, 00:21:07.815 "seek_data": true, 00:21:07.815 "copy": false, 00:21:07.815 "nvme_iov_md": false 00:21:07.815 }, 00:21:07.815 "driver_specific": { 00:21:07.815 "lvol": { 00:21:07.815 "lvol_store_uuid": "71f2b84d-ae6b-45de-8191-eea688c1bbb1", 00:21:07.815 "base_bdev": "nvme0n1", 00:21:07.815 "thin_provision": true, 00:21:07.815 "num_allocated_clusters": 0, 00:21:07.815 "snapshot": false, 00:21:07.815 "clone": false, 00:21:07.815 "esnap_clone": false 00:21:07.815 } 00:21:07.815 } 00:21:07.815 } 00:21:07.815 ]' 00:21:07.815 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
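The bs=4096 / nb=26476544 / bdev_size=103424 sequence above is the get_bdev_size helper at work: it pulls block_size and num_blocks out of bdev_get_bdevs JSON with jq and converts the product to MiB. A small sketch of the same arithmetic, assuming jq is available (the test itself uses it); the inline JSON is a trimmed stand-in for real `rpc.py bdev_get_bdevs -b <name>` output.

  #!/usr/bin/env bash
  # Size in MiB = block_size * num_blocks / 1024 / 1024, both fields read
  # from a bdev_get_bdevs-style JSON array.
  bdev_size_mb() {
      local json=$1 bs nb
      bs=$(jq '.[] .block_size' <<< "$json")
      nb=$(jq '.[] .num_blocks' <<< "$json")
      echo $(( bs * nb / 1024 / 1024 ))
  }

  # Values from the trace: 4096-byte blocks, 26476544 of them -> 103424 MiB,
  # which is the base_size the FTL restore test then checks against.
  sample='[{"name": "lvs/nvme0n1p0", "block_size": 4096, "num_blocks": 26476544}]'
  bdev_size_mb "$sample"   # prints 103424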
00:21:07.815 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:07.815 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:07.815 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:07.815 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:07.815 10:20:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:07.815 10:20:13 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:07.815 10:20:13 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:08.072 10:20:14 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:08.072 10:20:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:08.072 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:08.072 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:08.072 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:08.072 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:08.072 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebcf1c3-9727-4ea8-801c-6c24fbbf7634 00:21:08.332 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:08.332 { 00:21:08.332 "name": "aebcf1c3-9727-4ea8-801c-6c24fbbf7634", 00:21:08.332 "aliases": [ 00:21:08.332 "lvs/nvme0n1p0" 00:21:08.332 ], 00:21:08.332 "product_name": "Logical Volume", 00:21:08.332 "block_size": 4096, 00:21:08.332 "num_blocks": 26476544, 00:21:08.332 "uuid": "aebcf1c3-9727-4ea8-801c-6c24fbbf7634", 00:21:08.332 "assigned_rate_limits": { 00:21:08.332 "rw_ios_per_sec": 0, 00:21:08.332 "rw_mbytes_per_sec": 0, 00:21:08.332 "r_mbytes_per_sec": 0, 00:21:08.332 "w_mbytes_per_sec": 0 00:21:08.332 }, 00:21:08.332 "claimed": false, 00:21:08.332 "zoned": false, 00:21:08.332 "supported_io_types": { 00:21:08.332 "read": true, 00:21:08.332 "write": true, 00:21:08.332 "unmap": true, 00:21:08.332 "flush": false, 00:21:08.332 "reset": true, 00:21:08.332 "nvme_admin": false, 00:21:08.332 "nvme_io": false, 00:21:08.332 "nvme_io_md": false, 00:21:08.332 "write_zeroes": true, 00:21:08.332 "zcopy": false, 00:21:08.332 "get_zone_info": false, 00:21:08.332 "zone_management": false, 00:21:08.332 "zone_append": false, 00:21:08.333 "compare": false, 00:21:08.333 "compare_and_write": false, 00:21:08.333 "abort": false, 00:21:08.333 "seek_hole": true, 00:21:08.333 "seek_data": true, 00:21:08.333 "copy": false, 00:21:08.333 "nvme_iov_md": false 00:21:08.333 }, 00:21:08.333 "driver_specific": { 00:21:08.333 "lvol": { 00:21:08.333 "lvol_store_uuid": "71f2b84d-ae6b-45de-8191-eea688c1bbb1", 00:21:08.333 "base_bdev": "nvme0n1", 00:21:08.333 "thin_provision": true, 00:21:08.333 "num_allocated_clusters": 0, 00:21:08.333 "snapshot": false, 00:21:08.333 "clone": false, 00:21:08.333 "esnap_clone": false 00:21:08.333 } 00:21:08.333 } 00:21:08.333 } 00:21:08.333 ]' 00:21:08.333 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:08.333 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:08.333 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:08.333 10:20:14 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:08.333 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:08.333 10:20:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:08.333 10:20:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:08.333 10:20:14 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d aebcf1c3-9727-4ea8-801c-6c24fbbf7634 --l2p_dram_limit 10' 00:21:08.333 10:20:14 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:08.333 10:20:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:08.333 10:20:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:08.333 10:20:14 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:08.333 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:08.333 10:20:14 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aebcf1c3-9727-4ea8-801c-6c24fbbf7634 --l2p_dram_limit 10 -c nvc0n1p0 00:21:08.592 [2024-12-06 10:20:14.594189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.592 [2024-12-06 10:20:14.594335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:08.592 [2024-12-06 10:20:14.594360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:08.592 [2024-12-06 10:20:14.594369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.592 [2024-12-06 10:20:14.594434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.592 [2024-12-06 10:20:14.594461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:08.592 [2024-12-06 10:20:14.594473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:08.592 [2024-12-06 10:20:14.594480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.592 [2024-12-06 10:20:14.594506] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:08.592 [2024-12-06 10:20:14.595218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:08.592 [2024-12-06 10:20:14.595238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.592 [2024-12-06 10:20:14.595245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:08.592 [2024-12-06 10:20:14.595255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:21:08.592 [2024-12-06 10:20:14.595263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.592 [2024-12-06 10:20:14.595293] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 02e574f2-a660-4801-8ddf-67882ec9f339 00:21:08.592 [2024-12-06 10:20:14.596687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.592 [2024-12-06 10:20:14.596731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:08.592 [2024-12-06 10:20:14.596744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:08.592 [2024-12-06 10:20:14.596753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.592 [2024-12-06 10:20:14.601943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.592 [2024-12-06 
10:20:14.601975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:08.592 [2024-12-06 10:20:14.601985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.140 ms 00:21:08.592 [2024-12-06 10:20:14.601994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.592 [2024-12-06 10:20:14.602134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.592 [2024-12-06 10:20:14.602148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:08.592 [2024-12-06 10:20:14.602157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:08.592 [2024-12-06 10:20:14.602168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.592 [2024-12-06 10:20:14.602216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.593 [2024-12-06 10:20:14.602227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:08.593 [2024-12-06 10:20:14.602237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:08.593 [2024-12-06 10:20:14.602246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.593 [2024-12-06 10:20:14.602268] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:08.593 [2024-12-06 10:20:14.605930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.593 [2024-12-06 10:20:14.605960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:08.593 [2024-12-06 10:20:14.605972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.667 ms 00:21:08.593 [2024-12-06 10:20:14.605980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.593 [2024-12-06 10:20:14.606014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.593 [2024-12-06 10:20:14.606022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:08.593 [2024-12-06 10:20:14.606031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:08.593 [2024-12-06 10:20:14.606038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.593 [2024-12-06 10:20:14.606056] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:08.593 [2024-12-06 10:20:14.606196] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:08.593 [2024-12-06 10:20:14.606211] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:08.593 [2024-12-06 10:20:14.606221] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:08.593 [2024-12-06 10:20:14.606232] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606241] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606250] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:08.593 [2024-12-06 10:20:14.606259] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:08.593 [2024-12-06 10:20:14.606269] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:08.593 [2024-12-06 10:20:14.606276] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:08.593 [2024-12-06 10:20:14.606285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.593 [2024-12-06 10:20:14.606297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:08.593 [2024-12-06 10:20:14.606306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:21:08.593 [2024-12-06 10:20:14.606313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.593 [2024-12-06 10:20:14.606398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.593 [2024-12-06 10:20:14.606406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:08.593 [2024-12-06 10:20:14.606414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:08.593 [2024-12-06 10:20:14.606423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.593 [2024-12-06 10:20:14.606551] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:08.593 [2024-12-06 10:20:14.606562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:08.593 [2024-12-06 10:20:14.606572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:08.593 [2024-12-06 10:20:14.606595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:08.593 [2024-12-06 10:20:14.606619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.593 [2024-12-06 10:20:14.606634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:08.593 [2024-12-06 10:20:14.606642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:08.593 [2024-12-06 10:20:14.606652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:08.593 [2024-12-06 10:20:14.606659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:08.593 [2024-12-06 10:20:14.606669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:08.593 [2024-12-06 10:20:14.606675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:08.593 [2024-12-06 10:20:14.606692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:08.593 [2024-12-06 10:20:14.606720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:08.593 
[2024-12-06 10:20:14.606742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:08.593 [2024-12-06 10:20:14.606764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:08.593 [2024-12-06 10:20:14.606785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:08.593 [2024-12-06 10:20:14.606809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.593 [2024-12-06 10:20:14.606824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:08.593 [2024-12-06 10:20:14.606830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:08.593 [2024-12-06 10:20:14.606838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:08.593 [2024-12-06 10:20:14.606844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:08.593 [2024-12-06 10:20:14.606854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:08.593 [2024-12-06 10:20:14.606860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:08.593 [2024-12-06 10:20:14.606875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:08.593 [2024-12-06 10:20:14.606883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606889] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:08.593 [2024-12-06 10:20:14.606898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:08.593 [2024-12-06 10:20:14.606905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:08.593 [2024-12-06 10:20:14.606922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:08.593 [2024-12-06 10:20:14.606932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:08.593 [2024-12-06 10:20:14.606938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:08.593 [2024-12-06 10:20:14.606947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:08.593 [2024-12-06 10:20:14.606953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:08.593 [2024-12-06 10:20:14.606961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:08.593 [2024-12-06 10:20:14.606969] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:08.593 [2024-12-06 
10:20:14.606982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.593 [2024-12-06 10:20:14.606990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:08.593 [2024-12-06 10:20:14.606999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:08.593 [2024-12-06 10:20:14.607006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:08.593 [2024-12-06 10:20:14.607015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:08.593 [2024-12-06 10:20:14.607022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:08.593 [2024-12-06 10:20:14.607031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:08.593 [2024-12-06 10:20:14.607038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:08.593 [2024-12-06 10:20:14.607046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:08.593 [2024-12-06 10:20:14.607054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:08.593 [2024-12-06 10:20:14.607065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:08.593 [2024-12-06 10:20:14.607072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:08.593 [2024-12-06 10:20:14.607080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:08.593 [2024-12-06 10:20:14.607087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:08.593 [2024-12-06 10:20:14.607096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:08.593 [2024-12-06 10:20:14.607102] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:08.594 [2024-12-06 10:20:14.607112] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:08.594 [2024-12-06 10:20:14.607120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:08.594 [2024-12-06 10:20:14.607129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:08.594 [2024-12-06 10:20:14.607136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:08.594 [2024-12-06 10:20:14.607144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:08.594 [2024-12-06 10:20:14.607151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:08.594 [2024-12-06 10:20:14.607160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:08.594 [2024-12-06 10:20:14.607168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:21:08.594 [2024-12-06 10:20:14.607179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:08.594 [2024-12-06 10:20:14.607221] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:08.594 [2024-12-06 10:20:14.607233] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:11.882 [2024-12-06 10:20:17.486947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.487011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:11.882 [2024-12-06 10:20:17.487025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2879.714 ms 00:21:11.882 [2024-12-06 10:20:17.487036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.512431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.512483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.882 [2024-12-06 10:20:17.512495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.206 ms 00:21:11.882 [2024-12-06 10:20:17.512505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.512620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.512632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.882 [2024-12-06 10:20:17.512643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:11.882 [2024-12-06 10:20:17.512654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.543088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.543125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.882 [2024-12-06 10:20:17.543136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.401 ms 00:21:11.882 [2024-12-06 10:20:17.543145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.543174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.543184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.882 [2024-12-06 10:20:17.543193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:11.882 [2024-12-06 10:20:17.543207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.543591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.543611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.882 [2024-12-06 10:20:17.543621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:21:11.882 [2024-12-06 10:20:17.543630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 
[2024-12-06 10:20:17.543730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.543742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.882 [2024-12-06 10:20:17.543750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:21:11.882 [2024-12-06 10:20:17.543762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.557829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.557976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.882 [2024-12-06 10:20:17.557993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.050 ms 00:21:11.882 [2024-12-06 10:20:17.558003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.582678] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:11.882 [2024-12-06 10:20:17.585376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.585407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:11.882 [2024-12-06 10:20:17.585422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.298 ms 00:21:11.882 [2024-12-06 10:20:17.585431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.661342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.661382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:11.882 [2024-12-06 10:20:17.661396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.856 ms 00:21:11.882 [2024-12-06 10:20:17.661404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.661599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.661610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:11.882 [2024-12-06 10:20:17.661623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:21:11.882 [2024-12-06 10:20:17.661631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.882 [2024-12-06 10:20:17.685257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.882 [2024-12-06 10:20:17.685288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:11.883 [2024-12-06 10:20:17.685302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.581 ms 00:21:11.883 [2024-12-06 10:20:17.685312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.708049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.708180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:11.883 [2024-12-06 10:20:17.708201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.698 ms 00:21:11.883 [2024-12-06 10:20:17.708208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.708775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.708791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:11.883 
[2024-12-06 10:20:17.708804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:21:11.883 [2024-12-06 10:20:17.708811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.783042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.783171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:11.883 [2024-12-06 10:20:17.783194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.195 ms 00:21:11.883 [2024-12-06 10:20:17.783202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.808641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.808682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:11.883 [2024-12-06 10:20:17.808698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.084 ms 00:21:11.883 [2024-12-06 10:20:17.808706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.832353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.832388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:11.883 [2024-12-06 10:20:17.832400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.602 ms 00:21:11.883 [2024-12-06 10:20:17.832407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.856416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.856465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:11.883 [2024-12-06 10:20:17.856478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.943 ms 00:21:11.883 [2024-12-06 10:20:17.856486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.856525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.856534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:11.883 [2024-12-06 10:20:17.856546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.883 [2024-12-06 10:20:17.856554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.856639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.883 [2024-12-06 10:20:17.856651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:11.883 [2024-12-06 10:20:17.856661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:11.883 [2024-12-06 10:20:17.856668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.883 [2024-12-06 10:20:17.857487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3262.885 ms, result 0 00:21:11.883 { 00:21:11.883 "name": "ftl0", 00:21:11.883 "uuid": "02e574f2-a660-4801-8ddf-67882ec9f339" 00:21:11.883 } 00:21:11.883 10:20:17 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:11.883 10:20:17 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:12.141 10:20:18 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:12.141 10:20:18 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:12.141 [2024-12-06 10:20:18.265233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.141 [2024-12-06 10:20:18.265385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:12.141 [2024-12-06 10:20:18.265404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:12.141 [2024-12-06 10:20:18.265415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.141 [2024-12-06 10:20:18.265443] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:12.141 [2024-12-06 10:20:18.268062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.141 [2024-12-06 10:20:18.268091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:12.141 [2024-12-06 10:20:18.268104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.585 ms 00:21:12.141 [2024-12-06 10:20:18.268113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.141 [2024-12-06 10:20:18.268383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.141 [2024-12-06 10:20:18.268393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:12.141 [2024-12-06 10:20:18.268403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:21:12.141 [2024-12-06 10:20:18.268411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.141 [2024-12-06 10:20:18.271735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.141 [2024-12-06 10:20:18.271807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:12.141 [2024-12-06 10:20:18.271856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.306 ms 00:21:12.141 [2024-12-06 10:20:18.271878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.142 [2024-12-06 10:20:18.278039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.142 [2024-12-06 10:20:18.278139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:12.142 [2024-12-06 10:20:18.278188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:21:12.142 [2024-12-06 10:20:18.278209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.142 [2024-12-06 10:20:18.302396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.142 [2024-12-06 10:20:18.302523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:12.142 [2024-12-06 10:20:18.302579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.110 ms 00:21:12.142 [2024-12-06 10:20:18.302590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.401 [2024-12-06 10:20:18.318484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.401 [2024-12-06 10:20:18.318518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:12.401 [2024-12-06 10:20:18.318532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.859 ms 00:21:12.401 [2024-12-06 10:20:18.318540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.401 [2024-12-06 10:20:18.318686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.401 [2024-12-06 10:20:18.318697] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:12.401
[2024-12-06 10:20:18.318708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:21:12.401
[2024-12-06 10:20:18.318715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.401
[2024-12-06 10:20:18.341791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.401
[2024-12-06 10:20:18.341904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:12.401
[2024-12-06 10:20:18.341922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.054 ms 00:21:12.401
[2024-12-06 10:20:18.341929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.401
[2024-12-06 10:20:18.364963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.401
[2024-12-06 10:20:18.365064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:12.401
[2024-12-06 10:20:18.365081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.002 ms 00:21:12.401
[2024-12-06 10:20:18.365089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.401
[2024-12-06 10:20:18.388094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.401
[2024-12-06 10:20:18.388197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:12.401
[2024-12-06 10:20:18.388214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.972 ms 00:21:12.401
[2024-12-06 10:20:18.388221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.401
[2024-12-06 10:20:18.411418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.401
[2024-12-06 10:20:18.411533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:12.401
[2024-12-06 10:20:18.411552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.131 ms 00:21:12.401
[2024-12-06 10:20:18.411559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.401
[2024-12-06 10:20:18.411588] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:12.401
[2024-12-06 10:20:18.411603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100 (all 100 entries identical): 0 / 261120 wr_cnt: 0 state: free 00:21:12.402
[2024-12-06 10:20:18.412486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:12.402
[2024-12-06 10:20:18.412495] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 02e574f2-a660-4801-8ddf-67882ec9f339 00:21:12.402
[2024-12-06 10:20:18.412505] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:12.402
[2024-12-06 10:20:18.412516] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:12.402
[2024-12-06 10:20:18.412523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:12.402
[2024-12-06 10:20:18.412532] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:12.402
[2024-12-06 10:20:18.412539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:12.402
[2024-12-06 10:20:18.412548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:12.402
[2024-12-06 10:20:18.412555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:12.402
[2024-12-06 10:20:18.412563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:12.402
[2024-12-06 10:20:18.412569] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:12.402 [2024-12-06 10:20:18.412578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.402 [2024-12-06 10:20:18.412586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:12.402 [2024-12-06 10:20:18.412595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:21:12.402 [2024-12-06 10:20:18.412604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.402 [2024-12-06 10:20:18.425034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.402 [2024-12-06 10:20:18.425064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:12.402 [2024-12-06 10:20:18.425075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.396 ms 00:21:12.402 [2024-12-06 10:20:18.425083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.402 [2024-12-06 10:20:18.425432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.402 [2024-12-06 10:20:18.425441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:12.402 [2024-12-06 10:20:18.425470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:21:12.402 [2024-12-06 10:20:18.425477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.402 [2024-12-06 10:20:18.467013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.402 [2024-12-06 10:20:18.467135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:12.402 [2024-12-06 10:20:18.467154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.402 [2024-12-06 10:20:18.467162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.402 [2024-12-06 10:20:18.467220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.402 [2024-12-06 10:20:18.467228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:12.402 [2024-12-06 10:20:18.467239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.402 [2024-12-06 10:20:18.467246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.402 [2024-12-06 10:20:18.467329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.402 [2024-12-06 10:20:18.467339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:12.402 [2024-12-06 10:20:18.467348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.402 [2024-12-06 10:20:18.467355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.402 [2024-12-06 10:20:18.467375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.402 [2024-12-06 10:20:18.467383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:12.402 [2024-12-06 10:20:18.467392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.402 [2024-12-06 10:20:18.467401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.402 [2024-12-06 10:20:18.543227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.402 [2024-12-06 10:20:18.543263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.402 [2024-12-06 10:20:18.543276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:12.402 [2024-12-06 10:20:18.543283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.605373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.661 [2024-12-06 10:20:18.605409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.661 [2024-12-06 10:20:18.605424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.661 [2024-12-06 10:20:18.605431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.605516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.661 [2024-12-06 10:20:18.605526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.661 [2024-12-06 10:20:18.605536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.661 [2024-12-06 10:20:18.605544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.605623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.661 [2024-12-06 10:20:18.605632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.661 [2024-12-06 10:20:18.605642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.661 [2024-12-06 10:20:18.605649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.605737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.661 [2024-12-06 10:20:18.605746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.661 [2024-12-06 10:20:18.605755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.661 [2024-12-06 10:20:18.605763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.605800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.661 [2024-12-06 10:20:18.605809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:12.661 [2024-12-06 10:20:18.605819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.661 [2024-12-06 10:20:18.605826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.605863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.661 [2024-12-06 10:20:18.605872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.661 [2024-12-06 10:20:18.605882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.661 [2024-12-06 10:20:18.605890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.605933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.661 [2024-12-06 10:20:18.605942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.661 [2024-12-06 10:20:18.605951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.661 [2024-12-06 10:20:18.605958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.661 [2024-12-06 10:20:18.606081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 340.817 ms, result 0 00:21:12.661 true 00:21:12.661 10:20:18 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77325 
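The trace below moves on to the data-seeding phase of restore.sh: it fills a 1 GiB test file with random data, checksums it, and writes it through the freshly created ftl0 bdev with spdk_dd. A condensed sketch of that sequence, using the paths and flags visible in the trace (an illustration assembled from the logged commands, not a verbatim copy of restore.sh):

    TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

    # bs=4K count=256K -> 262144 blocks * 4096 B = 1073741824 B (1 GiB),
    # matching the dd summary below (1.0 GiB in 3.83574 s, ~280 MB/s).
    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

    # Checksum now so the data can be verified again after the FTL restore.
    md5sum "$TESTFILE"

    # Replay the file into the FTL bdev using the saved bdev subsystem config.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if="$TESTFILE" --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json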
00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77325 ']' 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77325 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77325 00:21:12.661 killing process with pid 77325 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77325' 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77325 00:21:12.661 10:20:18 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77325 00:21:19.217 10:20:24 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:22.496 262144+0 records in 00:21:22.496 262144+0 records out 00:21:22.496 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.83574 s, 280 MB/s 00:21:22.496 10:20:28 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:24.397 10:20:30 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:24.397 [2024-12-06 10:20:30.146898] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:21:24.397 [2024-12-06 10:20:30.147118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77543 ] 00:21:24.397 [2024-12-06 10:20:30.301978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.397 [2024-12-06 10:20:30.397321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.655 [2024-12-06 10:20:30.653601] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.655 [2024-12-06 10:20:30.653660] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:24.655 [2024-12-06 10:20:30.810709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.655 [2024-12-06 10:20:30.810756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:24.655 [2024-12-06 10:20:30.810769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.655 [2024-12-06 10:20:30.810777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.656 [2024-12-06 10:20:30.810823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.656 [2024-12-06 10:20:30.810835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.656 [2024-12-06 10:20:30.810843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:24.656 [2024-12-06 10:20:30.810850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.656 [2024-12-06 10:20:30.810866] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:24.656 [2024-12-06 10:20:30.811588] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:24.656 [2024-12-06 10:20:30.811623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.656 [2024-12-06 10:20:30.811630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.656 [2024-12-06 10:20:30.811639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.761 ms 00:21:24.656 [2024-12-06 10:20:30.811646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.656 [2024-12-06 10:20:30.812722] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:24.915 [2024-12-06 10:20:30.825461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.915 [2024-12-06 10:20:30.825495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:24.915 [2024-12-06 10:20:30.825506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.740 ms 00:21:24.915 [2024-12-06 10:20:30.825513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.915 [2024-12-06 10:20:30.825566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.915 [2024-12-06 10:20:30.825576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:24.915 [2024-12-06 10:20:30.825584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:24.915 [2024-12-06 10:20:30.825591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.915 [2024-12-06 10:20:30.830511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.915 [2024-12-06 10:20:30.830645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.915 [2024-12-06 10:20:30.830660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.870 ms 00:21:24.915 [2024-12-06 10:20:30.830672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.915 [2024-12-06 10:20:30.830739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.915 [2024-12-06 10:20:30.830747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.915 [2024-12-06 10:20:30.830755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:24.915 [2024-12-06 10:20:30.830762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.915 [2024-12-06 10:20:30.830809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.915 [2024-12-06 10:20:30.830819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:24.915 [2024-12-06 10:20:30.830827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:24.915 [2024-12-06 10:20:30.830834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.915 [2024-12-06 10:20:30.830857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:24.915 [2024-12-06 10:20:30.834098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.915 [2024-12-06 10:20:30.834207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.915 [2024-12-06 10:20:30.834226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.246 ms 00:21:24.915 [2024-12-06 10:20:30.834234] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.915 [2024-12-06 10:20:30.834266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.915 [2024-12-06 10:20:30.834274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:24.915 [2024-12-06 10:20:30.834281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:24.915 [2024-12-06 10:20:30.834288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.915 [2024-12-06 10:20:30.834308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:24.916 [2024-12-06 10:20:30.834327] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:24.916 [2024-12-06 10:20:30.834361] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:24.916 [2024-12-06 10:20:30.834378] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:24.916 [2024-12-06 10:20:30.834496] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:24.916 [2024-12-06 10:20:30.834508] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:24.916 [2024-12-06 10:20:30.834518] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:24.916 [2024-12-06 10:20:30.834528] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:24.916 [2024-12-06 10:20:30.834536] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:24.916 [2024-12-06 10:20:30.834544] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:24.916 [2024-12-06 10:20:30.834551] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:24.916 [2024-12-06 10:20:30.834561] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:24.916 [2024-12-06 10:20:30.834568] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:24.916 [2024-12-06 10:20:30.834576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.916 [2024-12-06 10:20:30.834583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:24.916 [2024-12-06 10:20:30.834590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:21:24.916 [2024-12-06 10:20:30.834597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.916 [2024-12-06 10:20:30.834678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.916 [2024-12-06 10:20:30.834686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:24.916 [2024-12-06 10:20:30.834694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:24.916 [2024-12-06 10:20:30.834700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.916 [2024-12-06 10:20:30.834812] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:24.916 [2024-12-06 10:20:30.834822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:24.916 [2024-12-06 10:20:30.834830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:24.916 [2024-12-06 10:20:30.834838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.916 [2024-12-06 10:20:30.834845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:24.916 [2024-12-06 10:20:30.834852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:24.916 [2024-12-06 10:20:30.834858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:24.916 [2024-12-06 10:20:30.834865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:24.916 [2024-12-06 10:20:30.834873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:24.916 [2024-12-06 10:20:30.834880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.916 [2024-12-06 10:20:30.834888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:24.916 [2024-12-06 10:20:30.834895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:24.916 [2024-12-06 10:20:30.834902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:24.916 [2024-12-06 10:20:30.834913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:24.916 [2024-12-06 10:20:30.834921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:24.916 [2024-12-06 10:20:30.834927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.916 [2024-12-06 10:20:30.834934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:24.916 [2024-12-06 10:20:30.834940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:24.916 [2024-12-06 10:20:30.834947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.916 [2024-12-06 10:20:30.834954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:24.916 [2024-12-06 10:20:30.834960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:24.916 [2024-12-06 10:20:30.834967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.916 [2024-12-06 10:20:30.834973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:24.916 [2024-12-06 10:20:30.834979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:24.916 [2024-12-06 10:20:30.834986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.916 [2024-12-06 10:20:30.834992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:24.916 [2024-12-06 10:20:30.834998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:24.916 [2024-12-06 10:20:30.835004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.916 [2024-12-06 10:20:30.835010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:24.916 [2024-12-06 10:20:30.835017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:24.916 [2024-12-06 10:20:30.835024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:24.916 [2024-12-06 10:20:30.835031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:24.916 [2024-12-06 10:20:30.835037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:24.916 [2024-12-06 10:20:30.835043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.916 [2024-12-06 10:20:30.835049] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:24.916 [2024-12-06 10:20:30.835055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:24.916 [2024-12-06 10:20:30.835062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:24.916 [2024-12-06 10:20:30.835068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:24.916 [2024-12-06 10:20:30.835074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:24.916 [2024-12-06 10:20:30.835080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.916 [2024-12-06 10:20:30.835087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:24.916 [2024-12-06 10:20:30.835093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:24.916 [2024-12-06 10:20:30.835100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.916 [2024-12-06 10:20:30.835108] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:24.916 [2024-12-06 10:20:30.835115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:24.916 [2024-12-06 10:20:30.835123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:24.916 [2024-12-06 10:20:30.835130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:24.916 [2024-12-06 10:20:30.835138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:24.916 [2024-12-06 10:20:30.835145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:24.916 [2024-12-06 10:20:30.835151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:24.916 [2024-12-06 10:20:30.835158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:24.916 [2024-12-06 10:20:30.835165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:24.916 [2024-12-06 10:20:30.835171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:24.916 [2024-12-06 10:20:30.835179] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:24.916 [2024-12-06 10:20:30.835188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.916 [2024-12-06 10:20:30.835199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:24.916 [2024-12-06 10:20:30.835206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:24.916 [2024-12-06 10:20:30.835213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:24.916 [2024-12-06 10:20:30.835220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:24.916 [2024-12-06 10:20:30.835227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:24.916 [2024-12-06 10:20:30.835234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:24.916 [2024-12-06 10:20:30.835240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:24.916 [2024-12-06 10:20:30.835247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:24.916 [2024-12-06 10:20:30.835254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:24.916 [2024-12-06 10:20:30.835261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:24.916 [2024-12-06 10:20:30.835267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:24.916 [2024-12-06 10:20:30.835275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:24.916 [2024-12-06 10:20:30.835281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:24.916 [2024-12-06 10:20:30.835289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:24.916 [2024-12-06 10:20:30.835295] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:24.916 [2024-12-06 10:20:30.835303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:24.916 [2024-12-06 10:20:30.835311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:24.916 [2024-12-06 10:20:30.835318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:24.916 [2024-12-06 10:20:30.835325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:24.916 [2024-12-06 10:20:30.835333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:24.916 [2024-12-06 10:20:30.835341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.835348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:24.917 [2024-12-06 10:20:30.835355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:21:24.917 [2024-12-06 10:20:30.835362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.861267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.861468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.917 [2024-12-06 10:20:30.861538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.862 ms 00:21:24.917 [2024-12-06 10:20:30.861568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.861665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.861722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:24.917 [2024-12-06 10:20:30.861746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:21:24.917 [2024-12-06 10:20:30.861765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.904028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.904168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.917 [2024-12-06 10:20:30.904226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.868 ms 00:21:24.917 [2024-12-06 10:20:30.904251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.904299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.904323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.917 [2024-12-06 10:20:30.904347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.917 [2024-12-06 10:20:30.904366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.904744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.904784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.917 [2024-12-06 10:20:30.904804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:21:24.917 [2024-12-06 10:20:30.904878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.905011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.905075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.917 [2024-12-06 10:20:30.905104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:24.917 [2024-12-06 10:20:30.905123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.918212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.918319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.917 [2024-12-06 10:20:30.918366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.058 ms 00:21:24.917 [2024-12-06 10:20:30.918389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.931141] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:24.917 [2024-12-06 10:20:30.931266] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:24.917 [2024-12-06 10:20:30.931324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.931344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:24.917 [2024-12-06 10:20:30.931363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.823 ms 00:21:24.917 [2024-12-06 10:20:30.931381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.955698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.955813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.917 [2024-12-06 10:20:30.955863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.276 ms 00:21:24.917 [2024-12-06 10:20:30.955885] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.967881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.968012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.917 [2024-12-06 10:20:30.968084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.686 ms 00:21:24.917 [2024-12-06 10:20:30.968107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.979689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.979798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.917 [2024-12-06 10:20:30.979847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.540 ms 00:21:24.917 [2024-12-06 10:20:30.979869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:30.980809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:30.980932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.917 [2024-12-06 10:20:30.980987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:21:24.917 [2024-12-06 10:20:30.981015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.037568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.037700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.917 [2024-12-06 10:20:31.037753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.522 ms 00:21:24.917 [2024-12-06 10:20:31.037781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.048162] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.917 [2024-12-06 10:20:31.050362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.050468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.917 [2024-12-06 10:20:31.050516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.539 ms 00:21:24.917 [2024-12-06 10:20:31.050538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.050628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.050654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.917 [2024-12-06 10:20:31.050674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:24.917 [2024-12-06 10:20:31.050692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.050827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.050856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.917 [2024-12-06 10:20:31.050877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:24.917 [2024-12-06 10:20:31.050895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.050927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.050948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:24.917 [2024-12-06 10:20:31.051004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:24.917 [2024-12-06 10:20:31.051027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.051072] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.917 [2024-12-06 10:20:31.051097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.051117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.917 [2024-12-06 10:20:31.051136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:24.917 [2024-12-06 10:20:31.051154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.074817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.074931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.917 [2024-12-06 10:20:31.074981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.632 ms 00:21:24.917 [2024-12-06 10:20:31.075008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.075123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.917 [2024-12-06 10:20:31.075164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.917 [2024-12-06 10:20:31.075229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:24.917 [2024-12-06 10:20:31.075251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.917 [2024-12-06 10:20:31.076131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 265.011 ms, result 0 00:21:26.291  [2024-12-06T10:20:33.414Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-06T10:20:34.367Z] Copying: 30/1024 [MB] (15 MBps) [2024-12-06T10:20:35.299Z] Copying: 67/1024 [MB] (37 MBps) [2024-12-06T10:20:36.230Z] Copying: 97/1024 [MB] (30 MBps) [2024-12-06T10:20:37.162Z] Copying: 123/1024 [MB] (25 MBps) [2024-12-06T10:20:38.095Z] Copying: 145/1024 [MB] (21 MBps) [2024-12-06T10:20:39.472Z] Copying: 167/1024 [MB] (22 MBps) [2024-12-06T10:20:40.406Z] Copying: 195/1024 [MB] (27 MBps) [2024-12-06T10:20:41.340Z] Copying: 232/1024 [MB] (36 MBps) [2024-12-06T10:20:42.274Z] Copying: 261/1024 [MB] (28 MBps) [2024-12-06T10:20:43.207Z] Copying: 287/1024 [MB] (25 MBps) [2024-12-06T10:20:44.141Z] Copying: 310/1024 [MB] (23 MBps) [2024-12-06T10:20:45.515Z] Copying: 325/1024 [MB] (15 MBps) [2024-12-06T10:20:46.451Z] Copying: 340/1024 [MB] (15 MBps) [2024-12-06T10:20:47.396Z] Copying: 369/1024 [MB] (29 MBps) [2024-12-06T10:20:48.342Z] Copying: 390/1024 [MB] (20 MBps) [2024-12-06T10:20:49.286Z] Copying: 409/1024 [MB] (19 MBps) [2024-12-06T10:20:50.231Z] Copying: 424/1024 [MB] (14 MBps) [2024-12-06T10:20:51.173Z] Copying: 439/1024 [MB] (15 MBps) [2024-12-06T10:20:52.117Z] Copying: 450/1024 [MB] (10 MBps) [2024-12-06T10:20:53.509Z] Copying: 474/1024 [MB] (24 MBps) [2024-12-06T10:20:54.145Z] Copying: 489/1024 [MB] (14 MBps) [2024-12-06T10:20:55.530Z] Copying: 510/1024 [MB] (20 MBps) [2024-12-06T10:20:56.101Z] Copying: 526/1024 [MB] (16 MBps) [2024-12-06T10:20:57.483Z] Copying: 539/1024 [MB] (13 MBps) [2024-12-06T10:20:58.427Z] Copying: 551/1024 [MB] (11 MBps) [2024-12-06T10:20:59.368Z] Copying: 563/1024 [MB] (12 
MBps) [2024-12-06T10:21:00.308Z] Copying: 578/1024 [MB] (14 MBps) [2024-12-06T10:21:01.252Z] Copying: 592/1024 [MB] (14 MBps) [2024-12-06T10:21:02.198Z] Copying: 606/1024 [MB] (14 MBps) [2024-12-06T10:21:03.143Z] Copying: 618/1024 [MB] (12 MBps) [2024-12-06T10:21:04.530Z] Copying: 630/1024 [MB] (12 MBps) [2024-12-06T10:21:05.100Z] Copying: 647/1024 [MB] (16 MBps) [2024-12-06T10:21:06.485Z] Copying: 663/1024 [MB] (16 MBps) [2024-12-06T10:21:07.425Z] Copying: 674/1024 [MB] (10 MBps) [2024-12-06T10:21:08.364Z] Copying: 684/1024 [MB] (10 MBps) [2024-12-06T10:21:09.308Z] Copying: 695/1024 [MB] (10 MBps) [2024-12-06T10:21:10.252Z] Copying: 705/1024 [MB] (10 MBps) [2024-12-06T10:21:11.195Z] Copying: 715/1024 [MB] (10 MBps) [2024-12-06T10:21:12.138Z] Copying: 726/1024 [MB] (10 MBps) [2024-12-06T10:21:13.526Z] Copying: 739/1024 [MB] (12 MBps) [2024-12-06T10:21:14.098Z] Copying: 776/1024 [MB] (37 MBps) [2024-12-06T10:21:15.490Z] Copying: 789/1024 [MB] (12 MBps) [2024-12-06T10:21:16.456Z] Copying: 806/1024 [MB] (16 MBps) [2024-12-06T10:21:17.399Z] Copying: 825/1024 [MB] (19 MBps) [2024-12-06T10:21:18.341Z] Copying: 844/1024 [MB] (18 MBps) [2024-12-06T10:21:19.288Z] Copying: 857/1024 [MB] (13 MBps) [2024-12-06T10:21:20.232Z] Copying: 877/1024 [MB] (19 MBps) [2024-12-06T10:21:21.171Z] Copying: 902/1024 [MB] (24 MBps) [2024-12-06T10:21:22.107Z] Copying: 922/1024 [MB] (20 MBps) [2024-12-06T10:21:23.492Z] Copying: 932/1024 [MB] (10 MBps) [2024-12-06T10:21:24.435Z] Copying: 943/1024 [MB] (11 MBps) [2024-12-06T10:21:25.381Z] Copying: 961/1024 [MB] (17 MBps) [2024-12-06T10:21:26.320Z] Copying: 974/1024 [MB] (13 MBps) [2024-12-06T10:21:27.262Z] Copying: 985/1024 [MB] (10 MBps) [2024-12-06T10:21:28.206Z] Copying: 995/1024 [MB] (10 MBps) [2024-12-06T10:21:29.149Z] Copying: 1016/1024 [MB] (20 MBps) [2024-12-06T10:21:29.149Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-06 10:21:28.781888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.781950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:22.982 [2024-12-06 10:21:28.781967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:22.982 [2024-12-06 10:21:28.781976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.781999] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:22.982 [2024-12-06 10:21:28.785159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.785217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:22.982 [2024-12-06 10:21:28.785238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.143 ms 00:22:22.982 [2024-12-06 10:21:28.785246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.788302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.788517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:22.982 [2024-12-06 10:21:28.788541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:22:22.982 [2024-12-06 10:21:28.788551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.808586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.808642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist L2P 00:22:22.982 [2024-12-06 10:21:28.808656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.010 ms 00:22:22.982 [2024-12-06 10:21:28.808663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.814836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.814877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:22.982 [2024-12-06 10:21:28.814889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:22:22.982 [2024-12-06 10:21:28.814897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.842334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.842388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:22.982 [2024-12-06 10:21:28.842401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.375 ms 00:22:22.982 [2024-12-06 10:21:28.842409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.859140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.859190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:22.982 [2024-12-06 10:21:28.859203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.662 ms 00:22:22.982 [2024-12-06 10:21:28.859222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.859373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.859389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:22.982 [2024-12-06 10:21:28.859400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:22.982 [2024-12-06 10:21:28.859408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.886198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.886409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:22.982 [2024-12-06 10:21:28.886432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.774 ms 00:22:22.982 [2024-12-06 10:21:28.886439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.912742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.912936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:22.982 [2024-12-06 10:21:28.912956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.237 ms 00:22:22.982 [2024-12-06 10:21:28.912965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.938838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 [2024-12-06 10:21:28.938888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:22.982 [2024-12-06 10:21:28.938899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.805 ms 00:22:22.982 [2024-12-06 10:21:28.938906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.964245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.982 
[2024-12-06 10:21:28.964297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:22.982 [2024-12-06 10:21:28.964308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.246 ms 00:22:22.982 [2024-12-06 10:21:28.964315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.982 [2024-12-06 10:21:28.964363] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:22.982 [2024-12-06 10:21:28.964380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:22.982 [2024-12-06 10:21:28.964399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:22.982 [2024-12-06 10:21:28.964407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 
0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964959] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.964995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:22.983 [2024-12-06 10:21:28.965101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 
10:21:28.965147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:22.984 [2024-12-06 10:21:28.965185] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:22.984 [2024-12-06 10:21:28.965198] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 02e574f2-a660-4801-8ddf-67882ec9f339 00:22:22.984 [2024-12-06 10:21:28.965206] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:22.984 [2024-12-06 10:21:28.965213] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:22.984 [2024-12-06 10:21:28.965220] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:22.984 [2024-12-06 10:21:28.965227] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:22.984 [2024-12-06 10:21:28.965235] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:22.984 [2024-12-06 10:21:28.965250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:22.984 [2024-12-06 10:21:28.965258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:22.984 [2024-12-06 10:21:28.965264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:22.984 [2024-12-06 10:21:28.965270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:22.984 [2024-12-06 10:21:28.965277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.984 [2024-12-06 10:21:28.965285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:22.984 [2024-12-06 10:21:28.965295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:22:22.984 [2024-12-06 10:21:28.965303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.984 [2024-12-06 10:21:28.978975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.984 [2024-12-06 10:21:28.979171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:22.984 [2024-12-06 10:21:28.979189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.650 ms 00:22:22.984 [2024-12-06 10:21:28.979198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.984 [2024-12-06 10:21:28.979636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.984 [2024-12-06 10:21:28.979650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:22.984 [2024-12-06 10:21:28.979659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:22:22.984 [2024-12-06 10:21:28.979676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.984 [2024-12-06 10:21:29.016315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.984 [2024-12-06 10:21:29.016369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.984 [2024-12-06 10:21:29.016381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.984 [2024-12-06 10:21:29.016391] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.984 [2024-12-06 10:21:29.016477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.984 [2024-12-06 10:21:29.016488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.984 [2024-12-06 10:21:29.016497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.984 [2024-12-06 10:21:29.016512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.984 [2024-12-06 10:21:29.016584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.984 [2024-12-06 10:21:29.016595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.984 [2024-12-06 10:21:29.016605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.984 [2024-12-06 10:21:29.016614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.984 [2024-12-06 10:21:29.016630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.984 [2024-12-06 10:21:29.016639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.984 [2024-12-06 10:21:29.016649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.984 [2024-12-06 10:21:29.016658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.984 [2024-12-06 10:21:29.101216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.984 [2024-12-06 10:21:29.101289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.984 [2024-12-06 10:21:29.101303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.984 [2024-12-06 10:21:29.101313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.170563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.246 [2024-12-06 10:21:29.170618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:23.246 [2024-12-06 10:21:29.170632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.246 [2024-12-06 10:21:29.170648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.170735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.246 [2024-12-06 10:21:29.170747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:23.246 [2024-12-06 10:21:29.170757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.246 [2024-12-06 10:21:29.170766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.170807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.246 [2024-12-06 10:21:29.170817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:23.246 [2024-12-06 10:21:29.170826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.246 [2024-12-06 10:21:29.170835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.170935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.246 [2024-12-06 10:21:29.170947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:23.246 [2024-12-06 10:21:29.170955] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.246 [2024-12-06 10:21:29.170963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.170997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.246 [2024-12-06 10:21:29.171008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:23.246 [2024-12-06 10:21:29.171016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.246 [2024-12-06 10:21:29.171026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.171067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.246 [2024-12-06 10:21:29.171081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:23.246 [2024-12-06 10:21:29.171090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.246 [2024-12-06 10:21:29.171099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.171147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:23.246 [2024-12-06 10:21:29.171158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:23.246 [2024-12-06 10:21:29.171167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:23.246 [2024-12-06 10:21:29.171176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:23.246 [2024-12-06 10:21:29.171314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.385 ms, result 0 00:22:23.818 00:22:23.818 00:22:23.818 10:21:29 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:24.079 [2024-12-06 10:21:30.046148] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
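The spdk_dd invocation above is the readback half of the restore test: it opens the FTL bdev described by ftl.json and copies 262144 logical blocks out of ftl0 into a plain file. A minimal sketch of the step follows, assuming FTL's default 4 KiB logical block size (so --count=262144 works out to the 1024 MB the copy meter reports) and an md5sum-based comparison, which is an assumption about restore.sh rather than its verbatim contents:

  # Read 262144 logical blocks (262144 * 4 KiB = 1 GiB) from the restored
  # FTL bdev 'ftl0' back into a regular file, reusing the bdev config
  # dumped by the earlier run.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --count=262144
  # Hypothetical verification step: checksum the readback and compare it
  # with the checksum taken before the device was shut down and restored.
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile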
00:22:24.079 [2024-12-06 10:21:30.046613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78155 ] 00:22:24.079 [2024-12-06 10:21:30.207604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.341 [2024-12-06 10:21:30.328524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.605 [2024-12-06 10:21:30.627869] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:24.605 [2024-12-06 10:21:30.627964] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:24.891 [2024-12-06 10:21:30.789626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.789901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:24.891 [2024-12-06 10:21:30.789928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:24.891 [2024-12-06 10:21:30.789939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.790014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.790028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.891 [2024-12-06 10:21:30.790037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:24.891 [2024-12-06 10:21:30.790045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.790069] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:24.891 [2024-12-06 10:21:30.791171] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:24.891 [2024-12-06 10:21:30.791238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.791249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.891 [2024-12-06 10:21:30.791260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.175 ms 00:22:24.891 [2024-12-06 10:21:30.791269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.793025] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:24.891 [2024-12-06 10:21:30.807949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.808002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:24.891 [2024-12-06 10:21:30.808016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.927 ms 00:22:24.891 [2024-12-06 10:21:30.808026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.808146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.808158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:24.891 [2024-12-06 10:21:30.808168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:24.891 [2024-12-06 10:21:30.808176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.816943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:24.891 [2024-12-06 10:21:30.817123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.891 [2024-12-06 10:21:30.817141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.683 ms 00:22:24.891 [2024-12-06 10:21:30.817157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.817244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.817254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.891 [2024-12-06 10:21:30.817264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:24.891 [2024-12-06 10:21:30.817271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.817320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.817331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:24.891 [2024-12-06 10:21:30.817340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:24.891 [2024-12-06 10:21:30.817348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.817375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.891 [2024-12-06 10:21:30.821508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.821552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.891 [2024-12-06 10:21:30.821567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.139 ms 00:22:24.891 [2024-12-06 10:21:30.821575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.821617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.891 [2024-12-06 10:21:30.821626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:24.891 [2024-12-06 10:21:30.821635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:24.891 [2024-12-06 10:21:30.821644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.891 [2024-12-06 10:21:30.821699] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:24.891 [2024-12-06 10:21:30.821726] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:24.891 [2024-12-06 10:21:30.821763] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:24.891 [2024-12-06 10:21:30.821782] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:24.891 [2024-12-06 10:21:30.821889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:24.891 [2024-12-06 10:21:30.821901] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:24.891 [2024-12-06 10:21:30.821912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:24.891 [2024-12-06 10:21:30.821922] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:24.892 [2024-12-06 10:21:30.821932] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:24.892 [2024-12-06 10:21:30.821940] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:24.892 [2024-12-06 10:21:30.821948] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:24.892 [2024-12-06 10:21:30.821959] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:24.892 [2024-12-06 10:21:30.821968] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:24.892 [2024-12-06 10:21:30.821976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.892 [2024-12-06 10:21:30.821985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:24.892 [2024-12-06 10:21:30.821993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:22:24.892 [2024-12-06 10:21:30.822001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.892 [2024-12-06 10:21:30.822085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.892 [2024-12-06 10:21:30.822102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:24.892 [2024-12-06 10:21:30.822110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:24.892 [2024-12-06 10:21:30.822117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.892 [2024-12-06 10:21:30.822225] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:24.892 [2024-12-06 10:21:30.822236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:24.892 [2024-12-06 10:21:30.822245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:24.892 [2024-12-06 10:21:30.822268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:24.892 [2024-12-06 10:21:30.822290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.892 [2024-12-06 10:21:30.822305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:24.892 [2024-12-06 10:21:30.822312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:24.892 [2024-12-06 10:21:30.822321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.892 [2024-12-06 10:21:30.822335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:24.892 [2024-12-06 10:21:30.822343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:24.892 [2024-12-06 10:21:30.822350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:24.892 [2024-12-06 10:21:30.822365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822372] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:24.892 [2024-12-06 10:21:30.822386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:24.892 [2024-12-06 10:21:30.822408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:24.892 [2024-12-06 10:21:30.822428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:24.892 [2024-12-06 10:21:30.822489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:24.892 [2024-12-06 10:21:30.822510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.892 [2024-12-06 10:21:30.822526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:24.892 [2024-12-06 10:21:30.822533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:24.892 [2024-12-06 10:21:30.822541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.892 [2024-12-06 10:21:30.822548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:24.892 [2024-12-06 10:21:30.822555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:24.892 [2024-12-06 10:21:30.822562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:24.892 [2024-12-06 10:21:30.822576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:24.892 [2024-12-06 10:21:30.822583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822590] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:24.892 [2024-12-06 10:21:30.822601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:24.892 [2024-12-06 10:21:30.822610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.892 [2024-12-06 10:21:30.822649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:24.892 [2024-12-06 10:21:30.822657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:24.892 [2024-12-06 10:21:30.822664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:24.892 
[2024-12-06 10:21:30.822672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:24.892 [2024-12-06 10:21:30.822679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:24.892 [2024-12-06 10:21:30.822687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:24.892 [2024-12-06 10:21:30.822696] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:24.892 [2024-12-06 10:21:30.822706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.892 [2024-12-06 10:21:30.822718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:24.892 [2024-12-06 10:21:30.822726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:24.892 [2024-12-06 10:21:30.822734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:24.892 [2024-12-06 10:21:30.822741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:24.892 [2024-12-06 10:21:30.822748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:24.892 [2024-12-06 10:21:30.822756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:24.892 [2024-12-06 10:21:30.822763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:24.892 [2024-12-06 10:21:30.822772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:24.892 [2024-12-06 10:21:30.822779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:24.892 [2024-12-06 10:21:30.822788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:24.892 [2024-12-06 10:21:30.822795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:24.892 [2024-12-06 10:21:30.822803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:24.892 [2024-12-06 10:21:30.822811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:24.892 [2024-12-06 10:21:30.822819] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:24.892 [2024-12-06 10:21:30.822827] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:24.892 [2024-12-06 10:21:30.822836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.892 [2024-12-06 10:21:30.822855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:24.892 [2024-12-06 10:21:30.822863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:24.892 [2024-12-06 10:21:30.822870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:24.892 [2024-12-06 10:21:30.822878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:24.892 [2024-12-06 10:21:30.822885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.892 [2024-12-06 10:21:30.822894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:24.892 [2024-12-06 10:21:30.822903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:22:24.892 [2024-12-06 10:21:30.822910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.892 [2024-12-06 10:21:30.855112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.892 [2024-12-06 10:21:30.855169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:24.892 [2024-12-06 10:21:30.855181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.151 ms 00:22:24.892 [2024-12-06 10:21:30.855194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.892 [2024-12-06 10:21:30.855287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.892 [2024-12-06 10:21:30.855296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:24.892 [2024-12-06 10:21:30.855306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:24.893 [2024-12-06 10:21:30.855314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.900619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.900676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:24.893 [2024-12-06 10:21:30.900690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.240 ms 00:22:24.893 [2024-12-06 10:21:30.900700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.900752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.900762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:24.893 [2024-12-06 10:21:30.900775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:24.893 [2024-12-06 10:21:30.900783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.901356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.901393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:24.893 [2024-12-06 10:21:30.901404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 00:22:24.893 [2024-12-06 10:21:30.901412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.901594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.901612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:24.893 [2024-12-06 10:21:30.901628] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:22:24.893 [2024-12-06 10:21:30.901637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.917601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.917653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:24.893 [2024-12-06 10:21:30.917665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.942 ms 00:22:24.893 [2024-12-06 10:21:30.917673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.932139] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:24.893 [2024-12-06 10:21:30.932194] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:24.893 [2024-12-06 10:21:30.932209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.932218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:24.893 [2024-12-06 10:21:30.932228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.422 ms 00:22:24.893 [2024-12-06 10:21:30.932235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.958431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.958654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:24.893 [2024-12-06 10:21:30.958677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:22:24.893 [2024-12-06 10:21:30.958686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.972180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.972245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:24.893 [2024-12-06 10:21:30.972257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.432 ms 00:22:24.893 [2024-12-06 10:21:30.972266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.985203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.985254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:24.893 [2024-12-06 10:21:30.985266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.885 ms 00:22:24.893 [2024-12-06 10:21:30.985273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.893 [2024-12-06 10:21:30.985967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.893 [2024-12-06 10:21:30.985995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:24.893 [2024-12-06 10:21:30.986008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:22:24.893 [2024-12-06 10:21:30.986016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.052713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.052974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:25.161 [2024-12-06 10:21:31.053008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.674 ms 00:22:25.161 [2024-12-06 10:21:31.053018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.064864] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:25.161 [2024-12-06 10:21:31.068015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.068224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:25.161 [2024-12-06 10:21:31.068248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.649 ms 00:22:25.161 [2024-12-06 10:21:31.068259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.068366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.068379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:25.161 [2024-12-06 10:21:31.068393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:25.161 [2024-12-06 10:21:31.068402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.068502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.068515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:25.161 [2024-12-06 10:21:31.068524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:22:25.161 [2024-12-06 10:21:31.068532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.068554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.068563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:25.161 [2024-12-06 10:21:31.068572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:25.161 [2024-12-06 10:21:31.068581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.068621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:25.161 [2024-12-06 10:21:31.068632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.068641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:25.161 [2024-12-06 10:21:31.068650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:25.161 [2024-12-06 10:21:31.068659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.095696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.095753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:25.161 [2024-12-06 10:21:31.095774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.018 ms 00:22:25.161 [2024-12-06 10:21:31.095783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.161 [2024-12-06 10:21:31.095873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.161 [2024-12-06 10:21:31.095885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:25.161 [2024-12-06 10:21:31.095895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:25.161 [2024-12-06 10:21:31.095903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
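As a cross-check of the ftl_layout_setup dump earlier in this startup: the 80.00 MiB 'l2p' region follows directly from the reported geometry, since 20971520 L2P entries at the reported 4-byte address size is exactly 80 MiB; and, assuming FTL's usual 4 KiB logical block (a figure this log does not state), those entries map 80 GiB of logical space on the 103424.00 MiB base device. A quick sketch of the arithmetic in shell:

  # 20971520 L2P entries * 4 bytes per entry = 80 MiB of mapping table,
  # matching the 'Region l2p ... blocks: 80.00 MiB' line above.
  echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"             # prints: 80 MiB
  # 20971520 entries * 4096 bytes per logical block = 80 GiB of mapped
  # logical space (the 4 KiB block size is an assumption, not in this log).
  echo "$(( 20971520 * 4096 / 1024 / 1024 / 1024 )) GiB"   # prints: 80 GiB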
00:22:25.161 [2024-12-06 10:21:31.097841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.689 ms, result 0 00:22:26.547  [2024-12-06T10:21:33.288Z] Copying: 10/1024 [MB] (10 MBps) [...] [2024-12-06T10:22:38.999Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-06 10:22:38.752264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.832
[2024-12-06 10:22:38.752373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:32.832 [2024-12-06 10:22:38.752399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:32.832 [2024-12-06 10:22:38.752415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.832 [2024-12-06 10:22:38.752477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.832 [2024-12-06 10:22:38.759754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.832 [2024-12-06 10:22:38.759834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:32.832 [2024-12-06 10:22:38.759856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.246 ms 00:23:32.832 [2024-12-06 10:22:38.759873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.832 [2024-12-06 10:22:38.760411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.832 [2024-12-06 10:22:38.760463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:32.832 [2024-12-06 10:22:38.760485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:23:32.832 [2024-12-06 10:22:38.760503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.832 [2024-12-06 10:22:38.764760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.832 [2024-12-06 10:22:38.764966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:32.832 [2024-12-06 10:22:38.764987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:23:32.832 [2024-12-06 10:22:38.765005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.832 [2024-12-06 10:22:38.771308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.832 [2024-12-06 10:22:38.771350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:32.832 [2024-12-06 10:22:38.771362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.271 ms 00:23:32.832 [2024-12-06 10:22:38.771371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.832 [2024-12-06 10:22:38.798076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.832 [2024-12-06 10:22:38.798128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:32.832 [2024-12-06 10:22:38.798142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.608 ms 00:23:32.833 [2024-12-06 10:22:38.798150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:23:32.833 [2024-12-06 10:22:38.814191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.833 [2024-12-06 10:22:38.814380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:32.833 [2024-12-06 10:22:38.814402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.995 ms 00:23:32.833 [2024-12-06 10:22:38.814411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.833 [2024-12-06 10:22:38.814937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.833 [2024-12-06 10:22:38.814975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:32.833 [2024-12-06 10:22:38.814989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:23:32.833 [2024-12-06 10:22:38.814997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.833 [2024-12-06 10:22:38.840624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.833 [2024-12-06 10:22:38.840813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:32.833 [2024-12-06 10:22:38.840835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.609 ms 00:23:32.833 [2024-12-06 10:22:38.840844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.833 [2024-12-06 10:22:38.865592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.833 [2024-12-06 10:22:38.865637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:32.833 [2024-12-06 10:22:38.865650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.643 ms 00:23:32.833 [2024-12-06 10:22:38.865658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.833 [2024-12-06 10:22:38.890127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.833 [2024-12-06 10:22:38.890172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:32.833 [2024-12-06 10:22:38.890184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.425 ms 00:23:32.833 [2024-12-06 10:22:38.890191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.833 [2024-12-06 10:22:38.914924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.833 [2024-12-06 10:22:38.914967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:32.833 [2024-12-06 10:22:38.914979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.662 ms 00:23:32.833 [2024-12-06 10:22:38.914986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.833 [2024-12-06 10:22:38.915028] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:32.833 [2024-12-06 10:22:38.915051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: 
free 00:23:32.833 [2024-12-06 10:22:38.915097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:32.833 [2024-12-06 10:22:38.915599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915734] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:32.834 [2024-12-06 10:22:38.915916] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:32.834 [2024-12-06 10:22:38.915924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 02e574f2-a660-4801-8ddf-67882ec9f339 00:23:32.834 [2024-12-06 10:22:38.915933] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:32.834 [2024-12-06 10:22:38.915940] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:32.834 [2024-12-06 10:22:38.915949] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:32.834 
[2024-12-06 10:22:38.915957] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:32.834 [2024-12-06 10:22:38.915971] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:32.834 [2024-12-06 10:22:38.915979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:32.834 [2024-12-06 10:22:38.915987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:32.834 [2024-12-06 10:22:38.915993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:32.834 [2024-12-06 10:22:38.916000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:32.834 [2024-12-06 10:22:38.916008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.834 [2024-12-06 10:22:38.916016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:32.834 [2024-12-06 10:22:38.916025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:23:32.834 [2024-12-06 10:22:38.916035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.834 [2024-12-06 10:22:38.929614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.834 [2024-12-06 10:22:38.929656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:32.834 [2024-12-06 10:22:38.929668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.559 ms 00:23:32.834 [2024-12-06 10:22:38.929677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.834 [2024-12-06 10:22:38.930073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.834 [2024-12-06 10:22:38.930083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:32.834 [2024-12-06 10:22:38.930099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:23:32.834 [2024-12-06 10:22:38.930107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.834 [2024-12-06 10:22:38.966376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.834 [2024-12-06 10:22:38.966426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.834 [2024-12-06 10:22:38.966438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.834 [2024-12-06 10:22:38.966466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.834 [2024-12-06 10:22:38.966535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.834 [2024-12-06 10:22:38.966546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.834 [2024-12-06 10:22:38.966561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.834 [2024-12-06 10:22:38.966570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.834 [2024-12-06 10:22:38.966679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.834 [2024-12-06 10:22:38.966692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.834 [2024-12-06 10:22:38.966702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.834 [2024-12-06 10:22:38.966711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.834 [2024-12-06 10:22:38.966728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:32.834 [2024-12-06 10:22:38.966739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:23:32.834 [2024-12-06 10:22:38.966748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:32.834 [2024-12-06 10:22:38.966761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.095 [2024-12-06 10:22:39.053120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.095 [2024-12-06 10:22:39.053178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.095 [2024-12-06 10:22:39.053192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.095 [2024-12-06 10:22:39.053201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.095 [2024-12-06 10:22:39.122666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.095 [2024-12-06 10:22:39.122719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.095 [2024-12-06 10:22:39.122736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.095 [2024-12-06 10:22:39.122746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.095 [2024-12-06 10:22:39.122805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.095 [2024-12-06 10:22:39.122815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.095 [2024-12-06 10:22:39.122824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.096 [2024-12-06 10:22:39.122832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.096 [2024-12-06 10:22:39.122889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.096 [2024-12-06 10:22:39.122900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.096 [2024-12-06 10:22:39.122909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.096 [2024-12-06 10:22:39.122917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.096 [2024-12-06 10:22:39.123022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.096 [2024-12-06 10:22:39.123033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.096 [2024-12-06 10:22:39.123042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.096 [2024-12-06 10:22:39.123050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.096 [2024-12-06 10:22:39.123084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.096 [2024-12-06 10:22:39.123094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.096 [2024-12-06 10:22:39.123102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.096 [2024-12-06 10:22:39.123111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.096 [2024-12-06 10:22:39.123158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.096 [2024-12-06 10:22:39.123167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.096 [2024-12-06 10:22:39.123177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.096 [2024-12-06 10:22:39.123185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.096 [2024-12-06 10:22:39.123235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.096 
[2024-12-06 10:22:39.123246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.096 [2024-12-06 10:22:39.123254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.096 [2024-12-06 10:22:39.123263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.096 [2024-12-06 10:22:39.123403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.121 ms, result 0 00:23:34.038 00:23:34.038 00:23:34.038 10:22:39 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:35.952 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:35.953 10:22:41 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:35.953 [2024-12-06 10:22:42.040980] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:23:35.953 [2024-12-06 10:22:42.041099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78889 ] 00:23:36.214 [2024-12-06 10:22:42.201786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.214 [2024-12-06 10:22:42.305220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.475 [2024-12-06 10:22:42.598400] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.475 [2024-12-06 10:22:42.598513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.737 [2024-12-06 10:22:42.759172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.737 [2024-12-06 10:22:42.759238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.737 [2024-12-06 10:22:42.759253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:36.737 [2024-12-06 10:22:42.759261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.737 [2024-12-06 10:22:42.759316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.737 [2024-12-06 10:22:42.759329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.737 [2024-12-06 10:22:42.759338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:36.737 [2024-12-06 10:22:42.759347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.737 [2024-12-06 10:22:42.759367] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.737 [2024-12-06 10:22:42.760551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.737 [2024-12-06 10:22:42.760604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.737 [2024-12-06 10:22:42.760615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.737 [2024-12-06 10:22:42.760626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.241 ms 00:23:36.737 [2024-12-06 10:22:42.760634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.737 [2024-12-06 10:22:42.762300] 
mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.737 [2024-12-06 10:22:42.776262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.737 [2024-12-06 10:22:42.776314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.737 [2024-12-06 10:22:42.776327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.964 ms 00:23:36.737 [2024-12-06 10:22:42.776336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.737 [2024-12-06 10:22:42.776417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.737 [2024-12-06 10:22:42.776427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.737 [2024-12-06 10:22:42.776436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:36.737 [2024-12-06 10:22:42.776463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.737 [2024-12-06 10:22:42.784319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.737 [2024-12-06 10:22:42.784359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.737 [2024-12-06 10:22:42.784370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.777 ms 00:23:36.737 [2024-12-06 10:22:42.784385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.738 [2024-12-06 10:22:42.784487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.738 [2024-12-06 10:22:42.784498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.738 [2024-12-06 10:22:42.784508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:36.738 [2024-12-06 10:22:42.784517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.738 [2024-12-06 10:22:42.784560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.738 [2024-12-06 10:22:42.784569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.738 [2024-12-06 10:22:42.784578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:36.738 [2024-12-06 10:22:42.784586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.738 [2024-12-06 10:22:42.784614] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.738 [2024-12-06 10:22:42.788626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.738 [2024-12-06 10:22:42.788663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.738 [2024-12-06 10:22:42.788677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:23:36.738 [2024-12-06 10:22:42.788684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.738 [2024-12-06 10:22:42.788722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.738 [2024-12-06 10:22:42.788730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.738 [2024-12-06 10:22:42.788739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:36.738 [2024-12-06 10:22:42.788747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.738 [2024-12-06 10:22:42.788796] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.738 [2024-12-06 10:22:42.788820] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.738 [2024-12-06 10:22:42.788858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.738 [2024-12-06 10:22:42.788877] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.738 [2024-12-06 10:22:42.788983] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.738 [2024-12-06 10:22:42.788994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.738 [2024-12-06 10:22:42.789006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.738 [2024-12-06 10:22:42.789017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789026] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789034] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:36.738 [2024-12-06 10:22:42.789043] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.738 [2024-12-06 10:22:42.789053] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.738 [2024-12-06 10:22:42.789062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.738 [2024-12-06 10:22:42.789070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.738 [2024-12-06 10:22:42.789077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.738 [2024-12-06 10:22:42.789085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:23:36.738 [2024-12-06 10:22:42.789093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.738 [2024-12-06 10:22:42.789183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.738 [2024-12-06 10:22:42.789192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.738 [2024-12-06 10:22:42.789200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:36.738 [2024-12-06 10:22:42.789207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.738 [2024-12-06 10:22:42.789313] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.738 [2024-12-06 10:22:42.789324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.738 [2024-12-06 10:22:42.789333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.738 [2024-12-06 10:22:42.789356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.738 [2024-12-06 10:22:42.789379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:23:36.738 [2024-12-06 10:22:42.789386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.738 [2024-12-06 10:22:42.789393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.738 [2024-12-06 10:22:42.789400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:36.738 [2024-12-06 10:22:42.789407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.738 [2024-12-06 10:22:42.789420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.738 [2024-12-06 10:22:42.789426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:36.738 [2024-12-06 10:22:42.789436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.738 [2024-12-06 10:22:42.789483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.738 [2024-12-06 10:22:42.789504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.738 [2024-12-06 10:22:42.789526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.738 [2024-12-06 10:22:42.789547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.738 [2024-12-06 10:22:42.789588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.738 [2024-12-06 10:22:42.789610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.738 [2024-12-06 10:22:42.789625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.738 [2024-12-06 10:22:42.789631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:36.738 [2024-12-06 10:22:42.789638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.738 [2024-12-06 10:22:42.789646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.738 [2024-12-06 10:22:42.789653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:36.738 [2024-12-06 10:22:42.789660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789667] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.738 [2024-12-06 10:22:42.789673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:36.738 [2024-12-06 10:22:42.789681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789689] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.738 [2024-12-06 10:22:42.789697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.738 [2024-12-06 10:22:42.789704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.738 [2024-12-06 10:22:42.789721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:36.738 [2024-12-06 10:22:42.789728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.738 [2024-12-06 10:22:42.789736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.738 [2024-12-06 10:22:42.789743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.738 [2024-12-06 10:22:42.789750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.738 [2024-12-06 10:22:42.789758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.738 [2024-12-06 10:22:42.789767] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.738 [2024-12-06 10:22:42.789788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.738 [2024-12-06 10:22:42.789802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:36.738 [2024-12-06 10:22:42.789810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:36.738 [2024-12-06 10:22:42.789817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:36.738 [2024-12-06 10:22:42.789825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:36.738 [2024-12-06 10:22:42.789832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:36.738 [2024-12-06 10:22:42.789840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:36.738 [2024-12-06 10:22:42.789848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:36.738 [2024-12-06 10:22:42.789855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:36.738 [2024-12-06 10:22:42.789862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:36.739 [2024-12-06 10:22:42.789870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:36.739 [2024-12-06 10:22:42.789877] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:36.739 [2024-12-06 10:22:42.789884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:36.739 [2024-12-06 10:22:42.789892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:36.739 [2024-12-06 10:22:42.789900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:36.739 [2024-12-06 10:22:42.789907] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.739 [2024-12-06 10:22:42.789916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.739 [2024-12-06 10:22:42.789924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.739 [2024-12-06 10:22:42.789931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.739 [2024-12-06 10:22:42.789938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.739 [2024-12-06 10:22:42.789946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.739 [2024-12-06 10:22:42.789953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.789960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.739 [2024-12-06 10:22:42.789968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:23:36.739 [2024-12-06 10:22:42.789975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.739 [2024-12-06 10:22:42.821514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.821559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.739 [2024-12-06 10:22:42.821571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.492 ms 00:23:36.739 [2024-12-06 10:22:42.821582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.739 [2024-12-06 10:22:42.821671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.821679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.739 [2024-12-06 10:22:42.821688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:36.739 [2024-12-06 10:22:42.821695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.739 [2024-12-06 10:22:42.870248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.870300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.739 [2024-12-06 10:22:42.870313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.493 ms 00:23:36.739 [2024-12-06 10:22:42.870322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.739 [2024-12-06 10:22:42.870370] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.870380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.739 [2024-12-06 10:22:42.870393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:36.739 [2024-12-06 10:22:42.870401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.739 [2024-12-06 10:22:42.871020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.871057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.739 [2024-12-06 10:22:42.871068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:23:36.739 [2024-12-06 10:22:42.871077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.739 [2024-12-06 10:22:42.871237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.871256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.739 [2024-12-06 10:22:42.871271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:36.739 [2024-12-06 10:22:42.871279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.739 [2024-12-06 10:22:42.886802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.739 [2024-12-06 10:22:42.886847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.739 [2024-12-06 10:22:42.886858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:23:36.739 [2024-12-06 10:22:42.886866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.000 [2024-12-06 10:22:42.901114] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:37.000 [2024-12-06 10:22:42.901316] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:37.000 [2024-12-06 10:22:42.901336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.000 [2024-12-06 10:22:42.901344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:37.000 [2024-12-06 10:22:42.901354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.363 ms 00:23:37.000 [2024-12-06 10:22:42.901362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.000 [2024-12-06 10:22:42.926950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.000 [2024-12-06 10:22:42.927000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:37.000 [2024-12-06 10:22:42.927012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.448 ms 00:23:37.001 [2024-12-06 10:22:42.927021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:42.939853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:42.939911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:37.001 [2024-12-06 10:22:42.939923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.768 ms 00:23:37.001 [2024-12-06 10:22:42.939930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:42.952149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 
10:22:42.952194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:37.001 [2024-12-06 10:22:42.952206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.174 ms 00:23:37.001 [2024-12-06 10:22:42.952213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:42.952871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:42.952895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:37.001 [2024-12-06 10:22:42.952908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:23:37.001 [2024-12-06 10:22:42.952917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.017597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.017661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:37.001 [2024-12-06 10:22:43.017683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.660 ms 00:23:37.001 [2024-12-06 10:22:43.017693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.028749] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:37.001 [2024-12-06 10:22:43.031708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.031748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:37.001 [2024-12-06 10:22:43.031760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.960 ms 00:23:37.001 [2024-12-06 10:22:43.031768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.031854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.031865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:37.001 [2024-12-06 10:22:43.031877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:37.001 [2024-12-06 10:22:43.031886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.031960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.031970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:37.001 [2024-12-06 10:22:43.031979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:37.001 [2024-12-06 10:22:43.031988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.032009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.032019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:37.001 [2024-12-06 10:22:43.032027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:37.001 [2024-12-06 10:22:43.032036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.032088] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:37.001 [2024-12-06 10:22:43.032100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.032108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:37.001 
[2024-12-06 10:22:43.032117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:37.001 [2024-12-06 10:22:43.032125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.057763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.057812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:37.001 [2024-12-06 10:22:43.057831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.619 ms 00:23:37.001 [2024-12-06 10:22:43.057839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.057924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.001 [2024-12-06 10:22:43.057934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:37.001 [2024-12-06 10:22:43.057943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:37.001 [2024-12-06 10:22:43.057952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.001 [2024-12-06 10:22:43.059222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.572 ms, result 0
00:23:37.942 [2024-12-06T10:23:42.822Z] Copying: 1024/1024 [MB] (average 17 MBps)
[2024-12-06 10:23:42.570114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.655 [2024-12-06 10:23:42.570188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:36.655 [2024-12-06 10:23:42.570216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:36.655 [2024-12-06 10:23:42.570225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.655 [2024-12-06 10:23:42.570374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.655 [2024-12-06 10:23:42.573363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.655 [2024-12-06 10:23:42.573538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:36.655 [2024-12-06 10:23:42.573560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:24:36.655 [2024-12-06 10:23:42.573568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.655 [2024-12-06 10:23:42.584755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.655 [2024-12-06 10:23:42.584907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:36.655 [2024-12-06 10:23:42.584927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.873 ms 00:24:36.655 [2024-12-06 10:23:42.584943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.655 [2024-12-06 10:23:42.609312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.655 [2024-12-06 10:23:42.609358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:36.655 [2024-12-06 10:23:42.609370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.347 ms 00:24:36.656 [2024-12-06 10:23:42.609379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.656 [2024-12-06 10:23:42.615551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.656 [2024-12-06 10:23:42.615720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:36.656 [2024-12-06
10:23:42.615740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.135 ms 00:24:36.656 [2024-12-06 10:23:42.615757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.656 [2024-12-06 10:23:42.642869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.656 [2024-12-06 10:23:42.643072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:36.656 [2024-12-06 10:23:42.643096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.065 ms 00:24:36.656 [2024-12-06 10:23:42.643106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.656 [2024-12-06 10:23:42.659890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.656 [2024-12-06 10:23:42.659944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:36.656 [2024-12-06 10:23:42.659958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.708 ms 00:24:36.656 [2024-12-06 10:23:42.659966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.919 [2024-12-06 10:23:42.919024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.919 [2024-12-06 10:23:42.919079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:36.919 [2024-12-06 10:23:42.919092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 258.998 ms 00:24:36.919 [2024-12-06 10:23:42.919103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.919 [2024-12-06 10:23:42.946040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.919 [2024-12-06 10:23:42.946239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:36.919 [2024-12-06 10:23:42.946262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.920 ms 00:24:36.919 [2024-12-06 10:23:42.946270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.919 [2024-12-06 10:23:42.972722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.919 [2024-12-06 10:23:42.972914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:36.919 [2024-12-06 10:23:42.972936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.410 ms 00:24:36.919 [2024-12-06 10:23:42.972945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.919 [2024-12-06 10:23:42.998153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.919 [2024-12-06 10:23:42.998206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:36.919 [2024-12-06 10:23:42.998219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.121 ms 00:24:36.919 [2024-12-06 10:23:42.998226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.919 [2024-12-06 10:23:43.023789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.919 [2024-12-06 10:23:43.023989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:36.919 [2024-12-06 10:23:43.024010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.480 ms 00:24:36.919 [2024-12-06 10:23:43.024019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.919 [2024-12-06 10:23:43.024107] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.919 [2024-12-06 10:23:43.024126] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104960 / 261120 wr_cnt: 1 state: open 00:24:36.919 [2024-12-06 10:23:43.024138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 
10:23:43.024329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.919 [2024-12-06 10:23:43.024337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:24:36.920 [2024-12-06 10:23:43.024556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:36.920 [2024-12-06 10:23:43.024984] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:36.920 [2024-12-06 10:23:43.024993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 02e574f2-a660-4801-8ddf-67882ec9f339 00:24:36.920 [2024-12-06 10:23:43.025002] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104960 00:24:36.920 [2024-12-06 10:23:43.025009] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105920 00:24:36.920 [2024-12-06 10:23:43.025017] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104960 00:24:36.920 [2024-12-06 10:23:43.025026] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:24:36.920 [2024-12-06 10:23:43.025048] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:36.920 [2024-12-06 10:23:43.025056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:36.920 [2024-12-06 10:23:43.025064] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:36.920 [2024-12-06 10:23:43.025071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:36.920 [2024-12-06 10:23:43.025078] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:36.920 [2024-12-06 10:23:43.025086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.920 [2024-12-06 10:23:43.025094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:36.920 [2024-12-06 10:23:43.025103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:24:36.920 [2024-12-06 10:23:43.025111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.920 [2024-12-06 10:23:43.038921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.920 [2024-12-06 10:23:43.039108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:36.920 [2024-12-06 10:23:43.039135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.775 ms 00:24:36.920 [2024-12-06 10:23:43.039145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.920 [2024-12-06 10:23:43.039570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.921 [2024-12-06 10:23:43.039583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:36.921 [2024-12-06 10:23:43.039592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:24:36.921 [2024-12-06 10:23:43.039600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.921 [2024-12-06 10:23:43.076510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.921 [2024-12-06 10:23:43.076564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.921 [2024-12-06 10:23:43.076577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.921 [2024-12-06 10:23:43.076587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.921 [2024-12-06 10:23:43.076663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.921 [2024-12-06 10:23:43.076673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.921 [2024-12-06 10:23:43.076683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.921 [2024-12-06 10:23:43.076692] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:36.921 [2024-12-06 10:23:43.076767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.921 [2024-12-06 10:23:43.076783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.921 [2024-12-06 10:23:43.076793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.921 [2024-12-06 10:23:43.076803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.921 [2024-12-06 10:23:43.076820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:36.921 [2024-12-06 10:23:43.076829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.921 [2024-12-06 10:23:43.076837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:36.921 [2024-12-06 10:23:43.076845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.163172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.163240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.183 [2024-12-06 10:23:43.163255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.183 [2024-12-06 10:23:43.163264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.234087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.234326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.183 [2024-12-06 10:23:43.234347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.183 [2024-12-06 10:23:43.234358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.234483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.234496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.183 [2024-12-06 10:23:43.234506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.183 [2024-12-06 10:23:43.234519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.234561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.234571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.183 [2024-12-06 10:23:43.234579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.183 [2024-12-06 10:23:43.234588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.234696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.234707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.183 [2024-12-06 10:23:43.234716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.183 [2024-12-06 10:23:43.234728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.234768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.234779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:37.183 [2024-12-06 10:23:43.234788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:37.183 [2024-12-06 10:23:43.234797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.234838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.234848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.183 [2024-12-06 10:23:43.234857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.183 [2024-12-06 10:23:43.234867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.234916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.183 [2024-12-06 10:23:43.234928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.183 [2024-12-06 10:23:43.234936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.183 [2024-12-06 10:23:43.234945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.183 [2024-12-06 10:23:43.235082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 665.784 ms, result 0 00:24:39.099 00:24:39.099 00:24:39.099 10:23:44 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:39.099 [2024-12-06 10:23:44.985527] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:24:39.099 [2024-12-06 10:23:44.985675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79530 ] 00:24:39.099 [2024-12-06 10:23:45.144869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.358 [2024-12-06 10:23:45.272304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.620 [2024-12-06 10:23:45.571244] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:39.620 [2024-12-06 10:23:45.571576] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:39.620 [2024-12-06 10:23:45.732659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.732725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:39.620 [2024-12-06 10:23:45.732740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:39.620 [2024-12-06 10:23:45.732749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.732806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.732819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:39.620 [2024-12-06 10:23:45.732828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:39.620 [2024-12-06 10:23:45.732836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.732857] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:39.620 [2024-12-06 10:23:45.733635] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:24:39.620 [2024-12-06 10:23:45.733656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.733664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:39.620 [2024-12-06 10:23:45.733675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:24:39.620 [2024-12-06 10:23:45.733684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.735490] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:39.620 [2024-12-06 10:23:45.750310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.750361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:39.620 [2024-12-06 10:23:45.750376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.822 ms 00:24:39.620 [2024-12-06 10:23:45.750384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.750487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.750499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:39.620 [2024-12-06 10:23:45.750508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:39.620 [2024-12-06 10:23:45.750516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.759153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.759374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:39.620 [2024-12-06 10:23:45.759395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.554 ms 00:24:39.620 [2024-12-06 10:23:45.759411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.759514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.759524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:39.620 [2024-12-06 10:23:45.759533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:39.620 [2024-12-06 10:23:45.759542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.759592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.759602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:39.620 [2024-12-06 10:23:45.759610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:39.620 [2024-12-06 10:23:45.759618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.759645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:39.620 [2024-12-06 10:23:45.763653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.763694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:39.620 [2024-12-06 10:23:45.763708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.014 ms 00:24:39.620 [2024-12-06 10:23:45.763716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.763756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:39.620 [2024-12-06 10:23:45.763765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:39.620 [2024-12-06 10:23:45.763774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:39.620 [2024-12-06 10:23:45.763782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.763840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:39.620 [2024-12-06 10:23:45.763866] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:39.620 [2024-12-06 10:23:45.763904] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:39.620 [2024-12-06 10:23:45.763922] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:39.620 [2024-12-06 10:23:45.764029] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:39.620 [2024-12-06 10:23:45.764041] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:39.620 [2024-12-06 10:23:45.764052] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:39.620 [2024-12-06 10:23:45.764063] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764072] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764081] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:39.620 [2024-12-06 10:23:45.764088] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:39.620 [2024-12-06 10:23:45.764116] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:39.620 [2024-12-06 10:23:45.764124] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:39.620 [2024-12-06 10:23:45.764134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.764142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:39.620 [2024-12-06 10:23:45.764151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:24:39.620 [2024-12-06 10:23:45.764158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.764242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.620 [2024-12-06 10:23:45.764252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:39.620 [2024-12-06 10:23:45.764259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:39.620 [2024-12-06 10:23:45.764267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.620 [2024-12-06 10:23:45.764376] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:39.620 [2024-12-06 10:23:45.764388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:39.620 [2024-12-06 10:23:45.764396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764413] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:39.620 [2024-12-06 10:23:45.764420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:39.620 [2024-12-06 10:23:45.764471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:39.620 [2024-12-06 10:23:45.764487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:39.620 [2024-12-06 10:23:45.764494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:39.620 [2024-12-06 10:23:45.764501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:39.620 [2024-12-06 10:23:45.764517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:39.620 [2024-12-06 10:23:45.764525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:39.620 [2024-12-06 10:23:45.764532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:39.620 [2024-12-06 10:23:45.764550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:39.620 [2024-12-06 10:23:45.764571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:39.620 [2024-12-06 10:23:45.764797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:39.620 [2024-12-06 10:23:45.764816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:39.620 [2024-12-06 10:23:45.764822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.620 [2024-12-06 10:23:45.764828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:39.620 [2024-12-06 10:23:45.764835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:39.621 [2024-12-06 10:23:45.764842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.621 [2024-12-06 10:23:45.764849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:39.621 [2024-12-06 10:23:45.764856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:39.621 [2024-12-06 10:23:45.764863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:39.621 [2024-12-06 10:23:45.764870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:39.621 [2024-12-06 10:23:45.764878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:39.621 
[2024-12-06 10:23:45.764885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:39.621 [2024-12-06 10:23:45.764893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:39.621 [2024-12-06 10:23:45.764900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:39.621 [2024-12-06 10:23:45.764906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.621 [2024-12-06 10:23:45.764913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:39.621 [2024-12-06 10:23:45.764919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:39.621 [2024-12-06 10:23:45.764926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.621 [2024-12-06 10:23:45.764933] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:39.621 [2024-12-06 10:23:45.764941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:39.621 [2024-12-06 10:23:45.764948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:39.621 [2024-12-06 10:23:45.764955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.621 [2024-12-06 10:23:45.764963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:39.621 [2024-12-06 10:23:45.764972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:39.621 [2024-12-06 10:23:45.764980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:39.621 [2024-12-06 10:23:45.764988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:39.621 [2024-12-06 10:23:45.764995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:39.621 [2024-12-06 10:23:45.765001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:39.621 [2024-12-06 10:23:45.765010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:39.621 [2024-12-06 10:23:45.765021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:39.621 [2024-12-06 10:23:45.765031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:39.621 [2024-12-06 10:23:45.765039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:39.621 [2024-12-06 10:23:45.765046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:39.621 [2024-12-06 10:23:45.765053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:39.621 [2024-12-06 10:23:45.765061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:39.621 [2024-12-06 10:23:45.765068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:39.621 [2024-12-06 10:23:45.765075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:39.621 [2024-12-06 10:23:45.765081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:39.621 [2024-12-06 10:23:45.765088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:39.621 [2024-12-06 10:23:45.765095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:39.621 [2024-12-06 10:23:45.765102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:39.621 [2024-12-06 10:23:45.765111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:39.621 [2024-12-06 10:23:45.765119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:39.621 [2024-12-06 10:23:45.765126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:39.621 [2024-12-06 10:23:45.765136] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:39.621 [2024-12-06 10:23:45.765145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:39.621 [2024-12-06 10:23:45.765154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:39.621 [2024-12-06 10:23:45.765161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:39.621 [2024-12-06 10:23:45.765168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:39.621 [2024-12-06 10:23:45.765175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:39.621 [2024-12-06 10:23:45.765183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.621 [2024-12-06 10:23:45.765191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:39.621 [2024-12-06 10:23:45.765199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:24:39.621 [2024-12-06 10:23:45.765206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.895 [2024-12-06 10:23:45.797769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.895 [2024-12-06 10:23:45.797998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.895 [2024-12-06 10:23:45.798029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.514 ms 00:24:39.895 [2024-12-06 10:23:45.798042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.895 [2024-12-06 10:23:45.798141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.895 [2024-12-06 10:23:45.798151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:39.895 [2024-12-06 10:23:45.798160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:39.895 [2024-12-06 10:23:45.798168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.895 [2024-12-06 
10:23:45.842415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.895 [2024-12-06 10:23:45.842493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.895 [2024-12-06 10:23:45.842507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.182 ms 00:24:39.895 [2024-12-06 10:23:45.842515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.895 [2024-12-06 10:23:45.842567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.895 [2024-12-06 10:23:45.842596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.895 [2024-12-06 10:23:45.842609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:39.895 [2024-12-06 10:23:45.842617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.895 [2024-12-06 10:23:45.843229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.895 [2024-12-06 10:23:45.843278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.895 [2024-12-06 10:23:45.843289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:24:39.895 [2024-12-06 10:23:45.843298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.895 [2024-12-06 10:23:45.843478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.895 [2024-12-06 10:23:45.843489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.895 [2024-12-06 10:23:45.843505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:24:39.895 [2024-12-06 10:23:45.843513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.896 [2024-12-06 10:23:45.859509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.896 [2024-12-06 10:23:45.859561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.896 [2024-12-06 10:23:45.859572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.975 ms 00:24:39.896 [2024-12-06 10:23:45.859581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.896 [2024-12-06 10:23:45.874311] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:39.896 [2024-12-06 10:23:45.874566] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:39.896 [2024-12-06 10:23:45.874588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.896 [2024-12-06 10:23:45.874597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:39.896 [2024-12-06 10:23:45.874607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.892 ms 00:24:39.896 [2024-12-06 10:23:45.874615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.896 [2024-12-06 10:23:45.900794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.896 [2024-12-06 10:23:45.900847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:39.896 [2024-12-06 10:23:45.900859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.128 ms 00:24:39.896 [2024-12-06 10:23:45.900867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.896 [2024-12-06 10:23:45.914023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action
00:24:39.896 [2024-12-06 10:23:45.914073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:24:39.896 [2024-12-06 10:23:45.914086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.096 ms
00:24:39.896 [2024-12-06 10:23:45.914094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:45.927162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:45.927211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:24:39.896 [2024-12-06 10:23:45.927224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.016 ms
00:24:39.896 [2024-12-06 10:23:45.927232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:45.927910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:45.927935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:24:39.896 [2024-12-06 10:23:45.927949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms
00:24:39.896 [2024-12-06 10:23:45.927957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:45.995412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:45.995503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:24:39.896 [2024-12-06 10:23:45.995527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.435 ms
00:24:39.896 [2024-12-06 10:23:45.995537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.006814] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:39.896 [2024-12-06 10:23:46.010019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:46.010068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:24:39.896 [2024-12-06 10:23:46.010082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.420 ms
00:24:39.896 [2024-12-06 10:23:46.010091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.010183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:46.010195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:24:39.896 [2024-12-06 10:23:46.010209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:24:39.896 [2024-12-06 10:23:46.010217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.012034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:46.012087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:24:39.896 [2024-12-06 10:23:46.012124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms
00:24:39.896 [2024-12-06 10:23:46.012133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.012165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:46.012174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:24:39.896 [2024-12-06 10:23:46.012183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:39.896 [2024-12-06 10:23:46.012190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.012238] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:39.896 [2024-12-06 10:23:46.012250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:46.012258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:24:39.896 [2024-12-06 10:23:46.012267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:24:39.896 [2024-12-06 10:23:46.012275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.039092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:46.039145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:39.896 [2024-12-06 10:23:46.039166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.798 ms
00:24:39.896 [2024-12-06 10:23:46.039175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.039264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:39.896 [2024-12-06 10:23:46.039274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:39.896 [2024-12-06 10:23:46.039284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:24:39.896 [2024-12-06 10:23:46.039293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:39.896 [2024-12-06 10:23:46.040780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.586 ms, result 0
00:24:41.280  [2024-12-06T10:23:48.390Z] Copying: 22/1024 [MB] (22 MBps) ... [intermediate copy-progress frames condensed; individual frames ran at 10-22 MBps] ... [2024-12-06T10:24:58.398Z] Copying: 1024/1024 [MB] (average 14 MBps)
[2024-12-06 10:24:58.171491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.231 [2024-12-06 10:24:58.171919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:52.231 [2024-12-06 10:24:58.171968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:25:52.231 [2024-12-06 10:24:58.171984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:52.231 [2024-12-06 10:24:58.172035] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:52.231 [2024-12-06 10:24:58.176794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:52.231 [2024-12-06 10:24:58.176843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:52.231 [2024-12-06 10:24:58.176856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.731 ms
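
The startup trace above is highly regular: every management step emits a four-record group from mngt/ftl_mngt.c (427 Action, 428 name, 430 duration, 431 status). Given a log laid out one record per line, as in the reflowed trace above, a minimal awk sketch can rank the slowest steps (the file name ftl0.log is a placeholder):

    # List FTL management steps by duration, slowest first.
    awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%10s ms  %s\n", $1, name }' ftl0.log |
        sort -rn | head

Run over the startup sequence above, it would put Restore P2L checkpoints (67.435 ms) first, then Set FTL dirty state (26.798 ms) and Initialize L2P (14.420 ms). The shutdown trace continues:

00:25:52.231 [2024-12-06 10:24:58.176865]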
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.231 [2024-12-06 10:24:58.177100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.231 [2024-12-06 10:24:58.177111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:52.231 [2024-12-06 10:24:58.177121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:25:52.231 [2024-12-06 10:24:58.177136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.231 [2024-12-06 10:24:58.184269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.231 [2024-12-06 10:24:58.184321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:52.231 [2024-12-06 10:24:58.184335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.114 ms 00:25:52.231 [2024-12-06 10:24:58.184345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.231 [2024-12-06 10:24:58.190655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.231 [2024-12-06 10:24:58.190698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:52.231 [2024-12-06 10:24:58.190709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.261 ms 00:25:52.231 [2024-12-06 10:24:58.190725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.231 [2024-12-06 10:24:58.217629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.231 [2024-12-06 10:24:58.217677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:52.231 [2024-12-06 10:24:58.217690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.856 ms 00:25:52.231 [2024-12-06 10:24:58.217699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.231 [2024-12-06 10:24:58.234473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.231 [2024-12-06 10:24:58.234522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:52.231 [2024-12-06 10:24:58.234536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.725 ms 00:25:52.231 [2024-12-06 10:24:58.234545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.493 [2024-12-06 10:24:58.612322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.493 [2024-12-06 10:24:58.612381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:52.493 [2024-12-06 10:24:58.612395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 377.730 ms 00:25:52.493 [2024-12-06 10:24:58.612405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.493 [2024-12-06 10:24:58.639087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.493 [2024-12-06 10:24:58.639139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:52.493 [2024-12-06 10:24:58.639153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.664 ms 00:25:52.493 [2024-12-06 10:24:58.639161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.756 [2024-12-06 10:24:58.664407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.756 [2024-12-06 10:24:58.664471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:52.756 [2024-12-06 10:24:58.664484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 25.200 ms 00:25:52.756 [2024-12-06 10:24:58.664492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.756 [2024-12-06 10:24:58.688920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.756 [2024-12-06 10:24:58.688967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:52.756 [2024-12-06 10:24:58.688978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.383 ms 00:25:52.756 [2024-12-06 10:24:58.688986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.756 [2024-12-06 10:24:58.713647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.756 [2024-12-06 10:24:58.713692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:52.756 [2024-12-06 10:24:58.713703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.589 ms 00:25:52.756 [2024-12-06 10:24:58.713710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.756 [2024-12-06 10:24:58.713751] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:52.756 [2024-12-06 10:24:58.713768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:52.756 [2024-12-06 10:24:58.713779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.713999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:52.756 [2024-12-06 10:24:58.714201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714285] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714504] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:52.757 [2024-12-06 10:24:58.714591] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:52.757 [2024-12-06 10:24:58.714600] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 02e574f2-a660-4801-8ddf-67882ec9f339 00:25:52.757 [2024-12-06 10:24:58.714609] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:52.757 [2024-12-06 10:24:58.714617] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 27072 00:25:52.757 [2024-12-06 10:24:58.714625] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 26112 00:25:52.757 [2024-12-06 10:24:58.714635] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0368 00:25:52.757 [2024-12-06 10:24:58.714647] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:52.757 [2024-12-06 10:24:58.714663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:52.757 [2024-12-06 10:24:58.714671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:52.757 [2024-12-06 10:24:58.714678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:52.757 [2024-12-06 10:24:58.714685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:52.757 [2024-12-06 10:24:58.714693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.757 [2024-12-06 10:24:58.714702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:52.757 [2024-12-06 10:24:58.714711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:25:52.757 [2024-12-06 10:24:58.714719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.728270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.757 [2024-12-06 10:24:58.728318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:52.757 [2024-12-06 10:24:58.728335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.533 ms 00:25:52.757 [2024-12-06 10:24:58.728343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.728762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.757 [2024-12-06 10:24:58.728786] 
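
The statistics dump above makes the write-amplification figure easy to verify: WAF is total writes divided by user writes, and 27072 / 26112 ≈ 1.0368, exactly the value reported (the extra 960 block writes presumably being FTL metadata rather than user data). A one-line check:

    # Recompute WAF from the dumped counters.
    awk 'BEGIN { printf "WAF: %.4f\n", 27072 / 26112 }'   # -> WAF: 1.0368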
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:52.757 [2024-12-06 10:24:58.728796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:25:52.757 [2024-12-06 10:24:58.728805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.764891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.764947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.757 [2024-12-06 10:24:58.764959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.764968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.765041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.765052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.757 [2024-12-06 10:24:58.765061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.765071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.765134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.765147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.757 [2024-12-06 10:24:58.765161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.765170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.765186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.765196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.757 [2024-12-06 10:24:58.765204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.765213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.849006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.849070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.757 [2024-12-06 10:24:58.849082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.849091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.917048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.917105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.757 [2024-12-06 10:24:58.917116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.917125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.917207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.917218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.757 [2024-12-06 10:24:58.917227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.917238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.917278] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.917289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.757 [2024-12-06 10:24:58.917297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.757 [2024-12-06 10:24:58.917305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.757 [2024-12-06 10:24:58.917400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.757 [2024-12-06 10:24:58.917412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.758 [2024-12-06 10:24:58.917421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.758 [2024-12-06 10:24:58.917429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.758 [2024-12-06 10:24:58.917484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.758 [2024-12-06 10:24:58.917495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:52.758 [2024-12-06 10:24:58.917503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.758 [2024-12-06 10:24:58.917512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.758 [2024-12-06 10:24:58.917555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.758 [2024-12-06 10:24:58.917565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.758 [2024-12-06 10:24:58.917573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.758 [2024-12-06 10:24:58.917581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.758 [2024-12-06 10:24:58.917630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.758 [2024-12-06 10:24:58.917642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.758 [2024-12-06 10:24:58.917650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.758 [2024-12-06 10:24:58.917658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.758 [2024-12-06 10:24:58.917795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 746.305 ms, result 0 00:25:53.703 00:25:53.703 00:25:53.703 10:24:59 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.252 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:56.252 10:25:01 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:56.252 10:25:01 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:56.252 10:25:01 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:56.252 10:25:01 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.252 10:25:01 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.252 10:25:01 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77325 00:25:56.252 10:25:01 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77325 ']' 00:25:56.252 10:25:01 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77325 00:25:56.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77325) - No such process 00:25:56.253 10:25:01 ftl.ftl_restore -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 77325 is not found' 00:25:56.253 Process with pid 77325 is not found 00:25:56.253 Remove shared memory files 00:25:56.253 10:25:01 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:56.253 10:25:01 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:56.253 10:25:01 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:56.253 10:25:01 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:56.253 10:25:01 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:56.253 10:25:01 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:56.253 10:25:01 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:56.253 00:25:56.253 real 4m51.150s 00:25:56.253 user 4m38.631s 00:25:56.253 sys 0m11.749s 00:25:56.253 ************************************ 00:25:56.253 END TEST ftl_restore 00:25:56.253 ************************************ 00:25:56.253 10:25:01 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.253 10:25:01 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:56.253 10:25:02 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:56.253 10:25:02 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:56.253 10:25:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.253 10:25:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:56.253 ************************************ 00:25:56.253 START TEST ftl_dirty_shutdown 00:25:56.253 ************************************ 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:56.253 * Looking for test storage... 
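
The teardown just above shows why a missing pid is not treated as a failure: restore_kill removes the work files, then killprocess 77325 probes the process with kill -0, gets "No such process" (the target already exited cleanly after the 'FTL shutdown' management process), and simply echoes that the pid is not found. A standalone sketch of that guard, paraphrasing the behaviour traced above rather than the exact autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                 # no pid given
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"                           # still alive: terminate it
        else
            echo "Process with pid $pid is not found"   # already gone, not an error
        fi
    }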
00:25:56.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:56.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.253 --rc genhtml_branch_coverage=1 00:25:56.253 --rc genhtml_function_coverage=1 00:25:56.253 --rc genhtml_legend=1 00:25:56.253 --rc geninfo_all_blocks=1 00:25:56.253 --rc geninfo_unexecuted_blocks=1 00:25:56.253 00:25:56.253 ' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:56.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.253 --rc genhtml_branch_coverage=1 00:25:56.253 --rc genhtml_function_coverage=1 00:25:56.253 --rc genhtml_legend=1 00:25:56.253 --rc geninfo_all_blocks=1 00:25:56.253 --rc geninfo_unexecuted_blocks=1 00:25:56.253 00:25:56.253 ' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:56.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.253 --rc genhtml_branch_coverage=1 00:25:56.253 --rc genhtml_function_coverage=1 00:25:56.253 --rc genhtml_legend=1 00:25:56.253 --rc geninfo_all_blocks=1 00:25:56.253 --rc geninfo_unexecuted_blocks=1 00:25:56.253 00:25:56.253 ' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:56.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.253 --rc genhtml_branch_coverage=1 00:25:56.253 --rc genhtml_function_coverage=1 00:25:56.253 --rc genhtml_legend=1 00:25:56.253 --rc geninfo_all_blocks=1 00:25:56.253 --rc geninfo_unexecuted_blocks=1 00:25:56.253 00:25:56.253 ' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
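
The xtrace run above is a complete walk through cmp_versions: 'lt 1.15 2' splits both version strings on dots, compares them field by field (ver1[0]=1 against ver2[0]=2), and returns success on the first smaller field, which selects the newer lcov option set. The same comparison as a self-contained helper, simplified from what the scripts/common.sh trace shows:

    # Compare two dotted versions; succeeds when $1 < $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2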
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:56.253 10:25:02 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80386 00:25:56.253 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80386 00:25:56.254 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80386 ']' 00:25:56.254 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.254 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:56.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.254 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.254 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:56.254 10:25:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:56.254 10:25:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:56.254 [2024-12-06 10:25:02.252175] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
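
At this point dirty_shutdown.sh has parsed its options (-c 0000:00:10.0 as the NV-cache device, 0000:00:11.0 as the base device), launched the SPDK target with core mask 0x1, recorded svcpid=80386, and entered waitforlisten, which blocks until the RPC socket at /var/tmp/spdk.sock answers (max_retries=100, per the trace). A minimal sketch of that start-and-wait step; the polling loop is a simplification of the real waitforlisten helper:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" -m 0x1 &
    svcpid=$!
    # Poll the default RPC socket until the target responds.
    for ((i = 0; i < 100; i++)); do
        kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt died during startup" >&2; exit 1; }
        "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done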
00:25:56.254 [2024-12-06 10:25:02.252310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80386 ] 00:25:56.254 [2024-12-06 10:25:02.414764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.514 [2024-12-06 10:25:02.535921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:57.088 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:57.662 { 00:25:57.662 "name": "nvme0n1", 00:25:57.662 "aliases": [ 00:25:57.662 "4c7c7325-5630-4c1e-b051-c05db570eb65" 00:25:57.662 ], 00:25:57.662 "product_name": "NVMe disk", 00:25:57.662 "block_size": 4096, 00:25:57.662 "num_blocks": 1310720, 00:25:57.662 "uuid": "4c7c7325-5630-4c1e-b051-c05db570eb65", 00:25:57.662 "numa_id": -1, 00:25:57.662 "assigned_rate_limits": { 00:25:57.662 "rw_ios_per_sec": 0, 00:25:57.662 "rw_mbytes_per_sec": 0, 00:25:57.662 "r_mbytes_per_sec": 0, 00:25:57.662 "w_mbytes_per_sec": 0 00:25:57.662 }, 00:25:57.662 "claimed": true, 00:25:57.662 "claim_type": "read_many_write_one", 00:25:57.662 "zoned": false, 00:25:57.662 "supported_io_types": { 00:25:57.662 "read": true, 00:25:57.662 "write": true, 00:25:57.662 "unmap": true, 00:25:57.662 "flush": true, 00:25:57.662 "reset": true, 00:25:57.662 "nvme_admin": true, 00:25:57.662 "nvme_io": true, 00:25:57.662 "nvme_io_md": false, 00:25:57.662 "write_zeroes": true, 00:25:57.662 "zcopy": false, 00:25:57.662 "get_zone_info": false, 00:25:57.662 "zone_management": false, 00:25:57.662 "zone_append": false, 00:25:57.662 "compare": true, 00:25:57.662 "compare_and_write": false, 00:25:57.662 "abort": true, 00:25:57.662 "seek_hole": false, 00:25:57.662 "seek_data": false, 00:25:57.662 
"copy": true, 00:25:57.662 "nvme_iov_md": false 00:25:57.662 }, 00:25:57.662 "driver_specific": { 00:25:57.662 "nvme": [ 00:25:57.662 { 00:25:57.662 "pci_address": "0000:00:11.0", 00:25:57.662 "trid": { 00:25:57.662 "trtype": "PCIe", 00:25:57.662 "traddr": "0000:00:11.0" 00:25:57.662 }, 00:25:57.662 "ctrlr_data": { 00:25:57.662 "cntlid": 0, 00:25:57.662 "vendor_id": "0x1b36", 00:25:57.662 "model_number": "QEMU NVMe Ctrl", 00:25:57.662 "serial_number": "12341", 00:25:57.662 "firmware_revision": "8.0.0", 00:25:57.662 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:57.662 "oacs": { 00:25:57.662 "security": 0, 00:25:57.662 "format": 1, 00:25:57.662 "firmware": 0, 00:25:57.662 "ns_manage": 1 00:25:57.662 }, 00:25:57.662 "multi_ctrlr": false, 00:25:57.662 "ana_reporting": false 00:25:57.662 }, 00:25:57.662 "vs": { 00:25:57.662 "nvme_version": "1.4" 00:25:57.662 }, 00:25:57.662 "ns_data": { 00:25:57.662 "id": 1, 00:25:57.662 "can_share": false 00:25:57.662 } 00:25:57.662 } 00:25:57.662 ], 00:25:57.662 "mp_policy": "active_passive" 00:25:57.662 } 00:25:57.662 } 00:25:57.662 ]' 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:57.662 10:25:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:57.923 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=71f2b84d-ae6b-45de-8191-eea688c1bbb1 00:25:57.923 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:57.923 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71f2b84d-ae6b-45de-8191-eea688c1bbb1 00:25:58.185 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:58.446 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=7fcb2fa2-2794-40b7-8d4a-5999d968fa6a 00:25:58.446 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7fcb2fa2-2794-40b7-8d4a-5999d968fa6a 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:58.707 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:58.969 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:58.969 { 00:25:58.969 "name": "5ef2cb1e-337a-4625-8062-992cbdc796af", 00:25:58.969 "aliases": [ 00:25:58.969 "lvs/nvme0n1p0" 00:25:58.969 ], 00:25:58.969 "product_name": "Logical Volume", 00:25:58.969 "block_size": 4096, 00:25:58.969 "num_blocks": 26476544, 00:25:58.969 "uuid": "5ef2cb1e-337a-4625-8062-992cbdc796af", 00:25:58.969 "assigned_rate_limits": { 00:25:58.969 "rw_ios_per_sec": 0, 00:25:58.969 "rw_mbytes_per_sec": 0, 00:25:58.969 "r_mbytes_per_sec": 0, 00:25:58.969 "w_mbytes_per_sec": 0 00:25:58.969 }, 00:25:58.969 "claimed": false, 00:25:58.969 "zoned": false, 00:25:58.969 "supported_io_types": { 00:25:58.969 "read": true, 00:25:58.969 "write": true, 00:25:58.969 "unmap": true, 00:25:58.969 "flush": false, 00:25:58.969 "reset": true, 00:25:58.969 "nvme_admin": false, 00:25:58.969 "nvme_io": false, 00:25:58.969 "nvme_io_md": false, 00:25:58.969 "write_zeroes": true, 00:25:58.969 "zcopy": false, 00:25:58.969 "get_zone_info": false, 00:25:58.969 "zone_management": false, 00:25:58.969 "zone_append": false, 00:25:58.969 "compare": false, 00:25:58.969 "compare_and_write": false, 00:25:58.969 "abort": false, 00:25:58.969 "seek_hole": true, 00:25:58.969 "seek_data": true, 00:25:58.969 "copy": false, 00:25:58.969 "nvme_iov_md": false 00:25:58.969 }, 00:25:58.969 "driver_specific": { 00:25:58.969 "lvol": { 00:25:58.969 "lvol_store_uuid": "7fcb2fa2-2794-40b7-8d4a-5999d968fa6a", 00:25:58.969 "base_bdev": "nvme0n1", 00:25:58.969 "thin_provision": true, 00:25:58.969 "num_allocated_clusters": 0, 00:25:58.969 "snapshot": false, 00:25:58.969 "clone": false, 00:25:58.969 "esnap_clone": false 00:25:58.969 } 00:25:58.969 } 00:25:58.969 } 00:25:58.969 ]' 00:25:58.969 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:58.969 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:58.969 10:25:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:58.969 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:58.969 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:58.969 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:58.969 10:25:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:58.969 10:25:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:58.969 10:25:05 ftl.ftl_dirty_shutdown -- 
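
get_bdev_size, traced twice above, is just block_size × num_blocks converted to MiB: 4096 × 1310720 = 5368709120 bytes = 5120 MiB for nvme0n1, and 4096 × 26476544 bytes = 103424 MiB for the thin lvol, matching bdev_size=5120 and bdev_size=103424 in the trace. The same computation condensed into one jq expression, using the rpc.py path from this run:

    # Bdev size in MiB = block_size * num_blocks / 1024^2.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 |
        jq '.[0].block_size * .[0].num_blocks / 1048576'   # -> 5120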
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:59.228 10:25:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:59.228 10:25:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:59.228 10:25:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:59.228 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:59.229 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:59.229 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:59.229 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:59.229 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:59.488 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:59.488 { 00:25:59.488 "name": "5ef2cb1e-337a-4625-8062-992cbdc796af", 00:25:59.488 "aliases": [ 00:25:59.488 "lvs/nvme0n1p0" 00:25:59.488 ], 00:25:59.488 "product_name": "Logical Volume", 00:25:59.488 "block_size": 4096, 00:25:59.488 "num_blocks": 26476544, 00:25:59.488 "uuid": "5ef2cb1e-337a-4625-8062-992cbdc796af", 00:25:59.488 "assigned_rate_limits": { 00:25:59.488 "rw_ios_per_sec": 0, 00:25:59.488 "rw_mbytes_per_sec": 0, 00:25:59.488 "r_mbytes_per_sec": 0, 00:25:59.488 "w_mbytes_per_sec": 0 00:25:59.488 }, 00:25:59.488 "claimed": false, 00:25:59.488 "zoned": false, 00:25:59.488 "supported_io_types": { 00:25:59.488 "read": true, 00:25:59.488 "write": true, 00:25:59.488 "unmap": true, 00:25:59.488 "flush": false, 00:25:59.488 "reset": true, 00:25:59.488 "nvme_admin": false, 00:25:59.488 "nvme_io": false, 00:25:59.488 "nvme_io_md": false, 00:25:59.488 "write_zeroes": true, 00:25:59.488 "zcopy": false, 00:25:59.488 "get_zone_info": false, 00:25:59.488 "zone_management": false, 00:25:59.488 "zone_append": false, 00:25:59.488 "compare": false, 00:25:59.488 "compare_and_write": false, 00:25:59.489 "abort": false, 00:25:59.489 "seek_hole": true, 00:25:59.489 "seek_data": true, 00:25:59.489 "copy": false, 00:25:59.489 "nvme_iov_md": false 00:25:59.489 }, 00:25:59.489 "driver_specific": { 00:25:59.489 "lvol": { 00:25:59.489 "lvol_store_uuid": "7fcb2fa2-2794-40b7-8d4a-5999d968fa6a", 00:25:59.489 "base_bdev": "nvme0n1", 00:25:59.489 "thin_provision": true, 00:25:59.489 "num_allocated_clusters": 0, 00:25:59.489 "snapshot": false, 00:25:59.489 "clone": false, 00:25:59.489 "esnap_clone": false 00:25:59.489 } 00:25:59.489 } 00:25:59.489 } 00:25:59.489 ]' 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:59.489 10:25:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:59.749 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:59.749 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:59.749 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5ef2cb1e-337a-4625-8062-992cbdc796af 00:25:59.749 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:59.749 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:59.749 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:59.749 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ef2cb1e-337a-4625-8062-992cbdc796af 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.010 { 00:26:00.010 "name": "5ef2cb1e-337a-4625-8062-992cbdc796af", 00:26:00.010 "aliases": [ 00:26:00.010 "lvs/nvme0n1p0" 00:26:00.010 ], 00:26:00.010 "product_name": "Logical Volume", 00:26:00.010 "block_size": 4096, 00:26:00.010 "num_blocks": 26476544, 00:26:00.010 "uuid": "5ef2cb1e-337a-4625-8062-992cbdc796af", 00:26:00.010 "assigned_rate_limits": { 00:26:00.010 "rw_ios_per_sec": 0, 00:26:00.010 "rw_mbytes_per_sec": 0, 00:26:00.010 "r_mbytes_per_sec": 0, 00:26:00.010 "w_mbytes_per_sec": 0 00:26:00.010 }, 00:26:00.010 "claimed": false, 00:26:00.010 "zoned": false, 00:26:00.010 "supported_io_types": { 00:26:00.010 "read": true, 00:26:00.010 "write": true, 00:26:00.010 "unmap": true, 00:26:00.010 "flush": false, 00:26:00.010 "reset": true, 00:26:00.010 "nvme_admin": false, 00:26:00.010 "nvme_io": false, 00:26:00.010 "nvme_io_md": false, 00:26:00.010 "write_zeroes": true, 00:26:00.010 "zcopy": false, 00:26:00.010 "get_zone_info": false, 00:26:00.010 "zone_management": false, 00:26:00.010 "zone_append": false, 00:26:00.010 "compare": false, 00:26:00.010 "compare_and_write": false, 00:26:00.010 "abort": false, 00:26:00.010 "seek_hole": true, 00:26:00.010 "seek_data": true, 00:26:00.010 "copy": false, 00:26:00.010 "nvme_iov_md": false 00:26:00.010 }, 00:26:00.010 "driver_specific": { 00:26:00.010 "lvol": { 00:26:00.010 "lvol_store_uuid": "7fcb2fa2-2794-40b7-8d4a-5999d968fa6a", 00:26:00.010 "base_bdev": "nvme0n1", 00:26:00.010 "thin_provision": true, 00:26:00.010 "num_allocated_clusters": 0, 00:26:00.010 "snapshot": false, 00:26:00.010 "clone": false, 00:26:00.010 "esnap_clone": false 00:26:00.010 } 00:26:00.010 } 00:26:00.010 } 00:26:00.010 ]' 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5ef2cb1e-337a-4625-8062-992cbdc796af 
--l2p_dram_limit 10' 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:00.010 10:25:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5ef2cb1e-337a-4625-8062-992cbdc796af --l2p_dram_limit 10 -c nvc0n1p0 00:26:00.272 [2024-12-06 10:25:06.181013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.181054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:00.272 [2024-12-06 10:25:06.181066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:00.272 [2024-12-06 10:25:06.181072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.181125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.181133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.272 [2024-12-06 10:25:06.181141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:00.272 [2024-12-06 10:25:06.181147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.181166] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:00.272 [2024-12-06 10:25:06.181760] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:00.272 [2024-12-06 10:25:06.181782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.181789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:00.272 [2024-12-06 10:25:06.181797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:26:00.272 [2024-12-06 10:25:06.181803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.181855] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6e267339-64bd-4d3b-92f1-986b0822465a 00:26:00.272 [2024-12-06 10:25:06.182795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.182826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:00.272 [2024-12-06 10:25:06.182834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:00.272 [2024-12-06 10:25:06.182841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.187580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.187607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.272 [2024-12-06 10:25:06.187615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:26:00.272 [2024-12-06 10:25:06.187623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.187688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.187697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.272 [2024-12-06 10:25:06.187703] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:00.272 [2024-12-06 10:25:06.187713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.187751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.187760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:00.272 [2024-12-06 10:25:06.187767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:00.272 [2024-12-06 10:25:06.187775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.187791] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:00.272 [2024-12-06 10:25:06.190640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.190666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.272 [2024-12-06 10:25:06.190676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.852 ms 00:26:00.272 [2024-12-06 10:25:06.190682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.272 [2024-12-06 10:25:06.190709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.272 [2024-12-06 10:25:06.190716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:00.273 [2024-12-06 10:25:06.190724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:00.273 [2024-12-06 10:25:06.190730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.273 [2024-12-06 10:25:06.190743] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:00.273 [2024-12-06 10:25:06.190850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:00.273 [2024-12-06 10:25:06.190862] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:00.273 [2024-12-06 10:25:06.190871] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:00.273 [2024-12-06 10:25:06.190880] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:00.273 [2024-12-06 10:25:06.190887] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:00.273 [2024-12-06 10:25:06.190894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:00.273 [2024-12-06 10:25:06.190901] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:00.273 [2024-12-06 10:25:06.190909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:00.273 [2024-12-06 10:25:06.190915] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:00.273 [2024-12-06 10:25:06.190922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.273 [2024-12-06 10:25:06.190932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:00.273 [2024-12-06 10:25:06.190939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:26:00.273 [2024-12-06 10:25:06.190944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.273 [2024-12-06 10:25:06.191011] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.273 [2024-12-06 10:25:06.191017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:00.273 [2024-12-06 10:25:06.191024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:00.273 [2024-12-06 10:25:06.191031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.273 [2024-12-06 10:25:06.191104] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:00.273 [2024-12-06 10:25:06.191116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:00.273 [2024-12-06 10:25:06.191124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:00.273 [2024-12-06 10:25:06.191143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:00.273 [2024-12-06 10:25:06.191161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.273 [2024-12-06 10:25:06.191172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:00.273 [2024-12-06 10:25:06.191177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:00.273 [2024-12-06 10:25:06.191186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.273 [2024-12-06 10:25:06.191191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:00.273 [2024-12-06 10:25:06.191197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:00.273 [2024-12-06 10:25:06.191202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:00.273 [2024-12-06 10:25:06.191215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:00.273 [2024-12-06 10:25:06.191232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:00.273 [2024-12-06 10:25:06.191249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:00.273 [2024-12-06 10:25:06.191267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191279] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:00.273 [2024-12-06 10:25:06.191284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:00.273 [2024-12-06 10:25:06.191303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.273 [2024-12-06 10:25:06.191313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:00.273 [2024-12-06 10:25:06.191318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:00.273 [2024-12-06 10:25:06.191325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.273 [2024-12-06 10:25:06.191330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:00.273 [2024-12-06 10:25:06.191337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:00.273 [2024-12-06 10:25:06.191342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:00.273 [2024-12-06 10:25:06.191353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:00.273 [2024-12-06 10:25:06.191359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191365] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:00.273 [2024-12-06 10:25:06.191371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:00.273 [2024-12-06 10:25:06.191377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.273 [2024-12-06 10:25:06.191388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:00.273 [2024-12-06 10:25:06.191396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:00.273 [2024-12-06 10:25:06.191401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:00.273 [2024-12-06 10:25:06.191408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:00.273 [2024-12-06 10:25:06.191412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:00.273 [2024-12-06 10:25:06.191418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:00.273 [2024-12-06 10:25:06.191425] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:00.273 [2024-12-06 10:25:06.191436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.273 [2024-12-06 10:25:06.191443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:00.273 [2024-12-06 10:25:06.191466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:00.273 [2024-12-06 10:25:06.191471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:00.273 [2024-12-06 10:25:06.191479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:00.273 [2024-12-06 10:25:06.191485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:00.273 [2024-12-06 10:25:06.191491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:00.273 [2024-12-06 10:25:06.191497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:00.273 [2024-12-06 10:25:06.191504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:00.273 [2024-12-06 10:25:06.191509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:00.273 [2024-12-06 10:25:06.191518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:00.273 [2024-12-06 10:25:06.191524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:00.273 [2024-12-06 10:25:06.191530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:00.273 [2024-12-06 10:25:06.191536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:00.273 [2024-12-06 10:25:06.191543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:00.273 [2024-12-06 10:25:06.191548] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:00.273 [2024-12-06 10:25:06.191555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.273 [2024-12-06 10:25:06.191561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.273 [2024-12-06 10:25:06.191569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:00.273 [2024-12-06 10:25:06.191574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:00.273 [2024-12-06 10:25:06.191581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:00.273 [2024-12-06 10:25:06.191586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.273 [2024-12-06 10:25:06.191593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:00.274 [2024-12-06 10:25:06.191599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:26:00.274 [2024-12-06 10:25:06.191605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.274 [2024-12-06 10:25:06.191634] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:00.274 [2024-12-06 10:25:06.191644] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:03.573 [2024-12-06 10:25:09.467139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.467228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:03.573 [2024-12-06 10:25:09.467245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3275.490 ms 00:26:03.573 [2024-12-06 10:25:09.467256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.498415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.498503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.573 [2024-12-06 10:25:09.498517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.922 ms 00:26:03.573 [2024-12-06 10:25:09.498528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.498668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.498682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:03.573 [2024-12-06 10:25:09.498695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:03.573 [2024-12-06 10:25:09.498709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.534007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.534064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.573 [2024-12-06 10:25:09.534077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.250 ms 00:26:03.573 [2024-12-06 10:25:09.534087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.534126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.534137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.573 [2024-12-06 10:25:09.534147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:03.573 [2024-12-06 10:25:09.534164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.534775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.534816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.573 [2024-12-06 10:25:09.534827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:26:03.573 [2024-12-06 10:25:09.534837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.534955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.534969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.573 [2024-12-06 10:25:09.534978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:26:03.573 [2024-12-06 10:25:09.534991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.552100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.552162] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.573 [2024-12-06 10:25:09.552174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.090 ms 00:26:03.573 [2024-12-06 10:25:09.552184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.574028] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:03.573 [2024-12-06 10:25:09.577846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.577894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:03.573 [2024-12-06 10:25:09.577908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:26:03.573 [2024-12-06 10:25:09.577917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.684794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.684855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:03.573 [2024-12-06 10:25:09.684873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.828 ms 00:26:03.573 [2024-12-06 10:25:09.684883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.685097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.685110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:03.573 [2024-12-06 10:25:09.685124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:26:03.573 [2024-12-06 10:25:09.685133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.711298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.711348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:03.573 [2024-12-06 10:25:09.711364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.109 ms 00:26:03.573 [2024-12-06 10:25:09.711375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.736897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.736946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:03.573 [2024-12-06 10:25:09.736962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.468 ms 00:26:03.573 [2024-12-06 10:25:09.736969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.573 [2024-12-06 10:25:09.737594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.573 [2024-12-06 10:25:09.737624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:03.573 [2024-12-06 10:25:09.737641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:26:03.573 [2024-12-06 10:25:09.737648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.835 [2024-12-06 10:25:09.825284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.835 [2024-12-06 10:25:09.825338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:03.835 [2024-12-06 10:25:09.825357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.588 ms 00:26:03.835 [2024-12-06 10:25:09.825366] 
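
Two of the layout numbers above can be cross-checked directly. The on-disk L2P region is the entry count times the 4-byte address size: 20971520 * 4 B = 80 MiB, exactly the "Region l2p ... blocks: 80.00 MiB" in the dump. Each P2L checkpoint region is 2048 pages * 4 KiB = 8 MiB, matching p2l0 through p2l3. And with --l2p_dram_limit 10, the resident slice of that 80 MiB table is capped at 10 MiB, hence the "l2p maximum resident size is: 9 (of 10) MiB" notice. A quick sanity check of the same arithmetic:

    # Cross-check of the FTL layout sizes reported in the dump above.
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 (MiB, l2p region)
    echo $(( 2048 * 4096 / 1024 / 1024 ))    # 8  (MiB, each p2l region)
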
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.835 [2024-12-06 10:25:09.852851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.835 [2024-12-06 10:25:09.852901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:03.835 [2024-12-06 10:25:09.852917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.375 ms 00:26:03.835 [2024-12-06 10:25:09.852925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.835 [2024-12-06 10:25:09.878814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.835 [2024-12-06 10:25:09.878864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:03.835 [2024-12-06 10:25:09.878879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.834 ms 00:26:03.835 [2024-12-06 10:25:09.878886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.835 [2024-12-06 10:25:09.904816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.835 [2024-12-06 10:25:09.904866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:03.835 [2024-12-06 10:25:09.904880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms 00:26:03.835 [2024-12-06 10:25:09.904888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.835 [2024-12-06 10:25:09.904944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.835 [2024-12-06 10:25:09.904954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:03.835 [2024-12-06 10:25:09.904969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:03.835 [2024-12-06 10:25:09.904977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.835 [2024-12-06 10:25:09.905070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.835 [2024-12-06 10:25:09.905083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:03.835 [2024-12-06 10:25:09.905094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:03.835 [2024-12-06 10:25:09.905102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.835 [2024-12-06 10:25:09.907358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3725.769 ms, result 0 00:26:03.835 { 00:26:03.835 "name": "ftl0", 00:26:03.835 "uuid": "6e267339-64bd-4d3b-92f1-986b0822465a" 00:26:03.835 } 00:26:03.835 10:25:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:03.835 10:25:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:04.096 10:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:04.096 10:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:04.096 10:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:04.356 /dev/nbd0 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:04.356 1+0 records in 00:26:04.356 1+0 records out 00:26:04.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532789 s, 7.7 MB/s 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:04.356 10:25:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:04.356 [2024-12-06 10:25:10.476127] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:26:04.356 [2024-12-06 10:25:10.476286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80528 ] 00:26:04.615 [2024-12-06 10:25:10.638643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.875 [2024-12-06 10:25:10.783950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.258  [2024-12-06T10:25:13.361Z] Copying: 186/1024 [MB] (186 MBps) [2024-12-06T10:25:14.296Z] Copying: 437/1024 [MB] (251 MBps) [2024-12-06T10:25:15.230Z] Copying: 691/1024 [MB] (254 MBps) [2024-12-06T10:25:15.488Z] Copying: 942/1024 [MB] (250 MBps) [2024-12-06T10:25:16.053Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:26:09.886 00:26:09.886 10:25:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:11.822 10:25:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:11.822 [2024-12-06 10:25:17.938622] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
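
The waitfornbd helper traced above is a readiness probe: poll /proc/partitions until the nbd device shows up, then push one direct-I/O block through it to confirm the device actually answers. A compact sketch of the same idea, with the device name and retry bound taken from this run (the sleep interval and output path are assumptions; the traced loop does not show a delay):

    # Sketch of the waitfornbd readiness probe seen above.
    nbd=nbd0
    for i in $(seq 1 20); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1   # retry interval is an assumption, not in the trace
    done
    # One direct 4 KiB read proves the device handles I/O.
    dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct

With the device ready, the test fills a 1 GiB testfile from /dev/urandom, records its md5sum, and then streams the file into /dev/nbd0 (the copy progress below); presumably the checksum is compared again after the dirty shutdown, though that step falls past this excerpt.
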
00:26:11.822 [2024-12-06 10:25:17.938739] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80604 ] 00:26:12.091 [2024-12-06 10:25:18.093747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.091 [2024-12-06 10:25:18.182220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.465  [2024-12-06T10:25:20.566Z] Copying: 30/1024 [MB] (30 MBps) [2024-12-06T10:25:21.501Z] Copying: 49/1024 [MB] (18 MBps) [2024-12-06T10:25:22.435Z] Copying: 64/1024 [MB] (14 MBps) [2024-12-06T10:25:23.809Z] Copying: 79/1024 [MB] (14 MBps) [2024-12-06T10:25:24.744Z] Copying: 99/1024 [MB] (20 MBps) [2024-12-06T10:25:25.678Z] Copying: 118/1024 [MB] (18 MBps) [2024-12-06T10:25:26.609Z] Copying: 150/1024 [MB] (31 MBps) [2024-12-06T10:25:27.541Z] Copying: 185/1024 [MB] (35 MBps) [2024-12-06T10:25:28.473Z] Copying: 221/1024 [MB] (35 MBps) [2024-12-06T10:25:29.407Z] Copying: 241/1024 [MB] (20 MBps) [2024-12-06T10:25:30.782Z] Copying: 257/1024 [MB] (16 MBps) [2024-12-06T10:25:31.716Z] Copying: 278/1024 [MB] (20 MBps) [2024-12-06T10:25:32.649Z] Copying: 313/1024 [MB] (34 MBps) [2024-12-06T10:25:33.582Z] Copying: 328/1024 [MB] (15 MBps) [2024-12-06T10:25:34.516Z] Copying: 340/1024 [MB] (11 MBps) [2024-12-06T10:25:35.449Z] Copying: 360/1024 [MB] (20 MBps) [2024-12-06T10:25:36.379Z] Copying: 378/1024 [MB] (17 MBps) [2024-12-06T10:25:37.752Z] Copying: 392/1024 [MB] (14 MBps) [2024-12-06T10:25:38.684Z] Copying: 413/1024 [MB] (21 MBps) [2024-12-06T10:25:39.617Z] Copying: 437/1024 [MB] (23 MBps) [2024-12-06T10:25:40.551Z] Copying: 462/1024 [MB] (25 MBps) [2024-12-06T10:25:41.485Z] Copying: 497/1024 [MB] (34 MBps) [2024-12-06T10:25:42.419Z] Copying: 531/1024 [MB] (34 MBps) [2024-12-06T10:25:43.793Z] Copying: 566/1024 [MB] (34 MBps) [2024-12-06T10:25:44.727Z] Copying: 600/1024 [MB] (34 MBps) [2024-12-06T10:25:45.660Z] Copying: 625/1024 [MB] (24 MBps) [2024-12-06T10:25:46.611Z] Copying: 644/1024 [MB] (18 MBps) [2024-12-06T10:25:47.560Z] Copying: 661/1024 [MB] (17 MBps) [2024-12-06T10:25:48.495Z] Copying: 691/1024 [MB] (30 MBps) [2024-12-06T10:25:49.432Z] Copying: 709/1024 [MB] (17 MBps) [2024-12-06T10:25:50.806Z] Copying: 726/1024 [MB] (16 MBps) [2024-12-06T10:25:51.741Z] Copying: 743/1024 [MB] (17 MBps) [2024-12-06T10:25:52.673Z] Copying: 774/1024 [MB] (30 MBps) [2024-12-06T10:25:53.609Z] Copying: 794/1024 [MB] (20 MBps) [2024-12-06T10:25:54.544Z] Copying: 811/1024 [MB] (16 MBps) [2024-12-06T10:25:55.479Z] Copying: 837/1024 [MB] (26 MBps) [2024-12-06T10:25:56.413Z] Copying: 852/1024 [MB] (14 MBps) [2024-12-06T10:25:57.787Z] Copying: 864/1024 [MB] (12 MBps) [2024-12-06T10:25:58.719Z] Copying: 880/1024 [MB] (15 MBps) [2024-12-06T10:25:59.651Z] Copying: 892/1024 [MB] (11 MBps) [2024-12-06T10:26:00.585Z] Copying: 905/1024 [MB] (13 MBps) [2024-12-06T10:26:01.518Z] Copying: 924/1024 [MB] (19 MBps) [2024-12-06T10:26:02.450Z] Copying: 940/1024 [MB] (16 MBps) [2024-12-06T10:26:03.384Z] Copying: 961/1024 [MB] (20 MBps) [2024-12-06T10:26:04.760Z] Copying: 978/1024 [MB] (17 MBps) [2024-12-06T10:26:05.699Z] Copying: 996/1024 [MB] (18 MBps) [2024-12-06T10:26:05.958Z] Copying: 1013/1024 [MB] (17 MBps) [2024-12-06T10:26:06.526Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:27:00.359 00:27:00.620 10:26:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:00.620 10:26:06 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:00.620 10:26:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:00.882 [2024-12-06 10:26:06.878380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.878424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:00.882 [2024-12-06 10:26:06.878434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:00.882 [2024-12-06 10:26:06.878444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.878471] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:00.882 [2024-12-06 10:26:06.880597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.880623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:00.882 [2024-12-06 10:26:06.880634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:27:00.882 [2024-12-06 10:26:06.880640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.882729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.882758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:00.882 [2024-12-06 10:26:06.882767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.065 ms 00:27:00.882 [2024-12-06 10:26:06.882774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.898049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.898079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:00.882 [2024-12-06 10:26:06.898089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.256 ms 00:27:00.882 [2024-12-06 10:26:06.898096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.902935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.902961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:00.882 [2024-12-06 10:26:06.902971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.809 ms 00:27:00.882 [2024-12-06 10:26:06.902979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.922028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.922056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:00.882 [2024-12-06 10:26:06.922065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.008 ms 00:27:00.882 [2024-12-06 10:26:06.922071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.935204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.935233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:00.882 [2024-12-06 10:26:06.935247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.100 ms 00:27:00.882 [2024-12-06 10:26:06.935253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
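
Teardown order matters here: the data path is flushed and dismantled top-down, sync on /dev/nbd0 first so buffered writes are drained, then nbd_stop_disk to detach the block device, and only then bdev_ftl_unload, whose trace follows (Deinit core IO channel, Persist L2P, ..., Set FTL clean state). Condensed from the commands above, same RPCs in the same order:

    # The shutdown sequence driven by dirty_shutdown.sh@78-80 above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sync /dev/nbd0                    # flush buffered writes to the nbd device
    "$rpc" nbd_stop_disk /dev/nbd0    # detach /dev/nbd0 from bdev ftl0
    "$rpc" bdev_ftl_unload -b ftl0    # persist FTL metadata and mark it clean
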
[FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.935369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.935378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:00.882 [2024-12-06 10:26:06.935386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:00.882 [2024-12-06 10:26:06.935395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.954165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.954192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:00.882 [2024-12-06 10:26:06.954201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.754 ms 00:27:00.882 [2024-12-06 10:26:06.954207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.972242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.972270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:00.882 [2024-12-06 10:26:06.972279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.004 ms 00:27:00.882 [2024-12-06 10:26:06.972285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:06.990240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:06.990268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:00.882 [2024-12-06 10:26:06.990277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.922 ms 00:27:00.882 [2024-12-06 10:26:06.990283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:07.008071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.882 [2024-12-06 10:26:07.008098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:00.882 [2024-12-06 10:26:07.008107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.725 ms 00:27:00.882 [2024-12-06 10:26:07.008113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.882 [2024-12-06 10:26:07.008142] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:00.882 [2024-12-06 10:26:07.008153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 
[2024-12-06 10:26:07.008215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:00.882 [2024-12-06 10:26:07.008339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 
state: free 00:27:00.883 [2024-12-06 10:26:07.008381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 
0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:00.883 [2024-12-06 10:26:07.008854] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:00.883 [2024-12-06 10:26:07.008862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e267339-64bd-4d3b-92f1-986b0822465a 00:27:00.883 [2024-12-06 10:26:07.008868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:00.883 [2024-12-06 10:26:07.008877] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:00.883 [2024-12-06 10:26:07.008884] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:00.883 [2024-12-06 10:26:07.008892] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:00.883 [2024-12-06 10:26:07.008897] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:00.883 [2024-12-06 10:26:07.008904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:00.883 [2024-12-06 10:26:07.008910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:00.883 
[2024-12-06 10:26:07.008916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:00.883 [2024-12-06 10:26:07.008921] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:00.883 [2024-12-06 10:26:07.008927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.883 [2024-12-06 10:26:07.008933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:00.883 [2024-12-06 10:26:07.008943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:27:00.883 [2024-12-06 10:26:07.008948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.883 [2024-12-06 10:26:07.018337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.883 [2024-12-06 10:26:07.018361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:00.883 [2024-12-06 10:26:07.018370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.363 ms 00:27:00.883 [2024-12-06 10:26:07.018376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:00.883 [2024-12-06 10:26:07.018651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:00.883 [2024-12-06 10:26:07.018658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:00.884 [2024-12-06 10:26:07.018666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:27:00.884 [2024-12-06 10:26:07.018672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.051903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.051934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.144 [2024-12-06 10:26:07.051944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.051950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.051997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.052004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.144 [2024-12-06 10:26:07.052012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.052018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.052098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.052109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.144 [2024-12-06 10:26:07.052117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.052123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.052139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.052145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.144 [2024-12-06 10:26:07.052152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.052167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.111319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.111353] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:01.144 [2024-12-06 10:26:07.111362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.111368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.160039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.160071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:01.144 [2024-12-06 10:26:07.160081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.160087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.160146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.160154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:01.144 [2024-12-06 10:26:07.160172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.160178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.160228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.144 [2024-12-06 10:26:07.160236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:01.144 [2024-12-06 10:26:07.160243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.144 [2024-12-06 10:26:07.160249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.144 [2024-12-06 10:26:07.160319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.145 [2024-12-06 10:26:07.160327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:01.145 [2024-12-06 10:26:07.160336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.145 [2024-12-06 10:26:07.160342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.145 [2024-12-06 10:26:07.160368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.145 [2024-12-06 10:26:07.160374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:01.145 [2024-12-06 10:26:07.160381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.145 [2024-12-06 10:26:07.160387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.145 [2024-12-06 10:26:07.160416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.145 [2024-12-06 10:26:07.160423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:01.145 [2024-12-06 10:26:07.160431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.145 [2024-12-06 10:26:07.160438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.145 [2024-12-06 10:26:07.160492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.145 [2024-12-06 10:26:07.160501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:01.145 [2024-12-06 10:26:07.160509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.145 [2024-12-06 10:26:07.160515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.145 [2024-12-06 10:26:07.160617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 282.208 ms, result 0 00:27:01.145 true 00:27:01.145 10:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80386 00:27:01.145 10:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80386 00:27:01.145 10:26:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:01.145 [2024-12-06 10:26:07.256391] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:27:01.145 [2024-12-06 10:26:07.256525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81116 ] 00:27:01.407 [2024-12-06 10:26:07.410721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.407 [2024-12-06 10:26:07.492481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.792  [2024-12-06T10:26:09.902Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-06T10:26:10.845Z] Copying: 511/1024 [MB] (256 MBps) [2024-12-06T10:26:11.789Z] Copying: 765/1024 [MB] (254 MBps) [2024-12-06T10:26:11.789Z] Copying: 1014/1024 [MB] (249 MBps) [2024-12-06T10:26:12.361Z] Copying: 1024/1024 [MB] (average 253 MBps) 00:27:06.194 00:27:06.194 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80386 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:06.194 10:26:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:06.194 [2024-12-06 10:26:12.351350] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
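For reference, the sizes implied by the spdk_dd flags above: --bs=4096 with --count=262144 copies exactly 1 GiB per pass (the "1024/1024 [MB]" in the progress lines), and --seek=262144 offsets the second pass by the same amount, assuming spdk_dd follows classic dd semantics of counting --seek in output blocks. A minimal standalone C sketch of that arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* Flags copied from the spdk_dd invocations above. */
        const unsigned long long bs    = 4096;    /* --bs:    bytes per block */
        const unsigned long long count = 262144;  /* --count: blocks per pass */
        const unsigned long long seek  = 262144;  /* --seek:  output offset in
                                                   * blocks (assumed dd-style) */

        /* 262144 * 4096 B = 1073741824 B = 1024 MiB per pass. */
        printf("bytes per pass: %llu (%llu MiB)\n", bs * count, (bs * count) >> 20);
        printf("seek offset:    %llu bytes\n", bs * seek);
        return 0;
    }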
00:27:06.194 [2024-12-06 10:26:12.351477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81174 ] 00:27:06.455 [2024-12-06 10:26:12.506599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.455 [2024-12-06 10:26:12.589997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.716 [2024-12-06 10:26:12.801734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.716 [2024-12-06 10:26:12.801782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.716 [2024-12-06 10:26:12.864460] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:06.716 [2024-12-06 10:26:12.864730] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:06.716 [2024-12-06 10:26:12.865414] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:07.291 [2024-12-06 10:26:13.249455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.291 [2024-12-06 10:26:13.249486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:07.291 [2024-12-06 10:26:13.249495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:07.291 [2024-12-06 10:26:13.249503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.291 [2024-12-06 10:26:13.249540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.291 [2024-12-06 10:26:13.249549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:07.291 [2024-12-06 10:26:13.249555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:07.292 [2024-12-06 10:26:13.249561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.249573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:07.292 [2024-12-06 10:26:13.250081] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:07.292 [2024-12-06 10:26:13.250094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.250099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:07.292 [2024-12-06 10:26:13.250107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:27:07.292 [2024-12-06 10:26:13.250112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.251044] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:07.292 [2024-12-06 10:26:13.260922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.260948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:07.292 [2024-12-06 10:26:13.260957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.879 ms 00:27:07.292 [2024-12-06 10:26:13.260963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.261008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.261015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:07.292 [2024-12-06 10:26:13.261022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:07.292 [2024-12-06 10:26:13.261028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.265494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.265513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:07.292 [2024-12-06 10:26:13.265521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.428 ms 00:27:07.292 [2024-12-06 10:26:13.265526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.265582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.265588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:07.292 [2024-12-06 10:26:13.265595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:07.292 [2024-12-06 10:26:13.265600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.265643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.265651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:07.292 [2024-12-06 10:26:13.265657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:07.292 [2024-12-06 10:26:13.265663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.265678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:07.292 [2024-12-06 10:26:13.268351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.268371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:07.292 [2024-12-06 10:26:13.268378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:27:07.292 [2024-12-06 10:26:13.268384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.268411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.268418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:07.292 [2024-12-06 10:26:13.268424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:07.292 [2024-12-06 10:26:13.268430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.268454] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:07.292 [2024-12-06 10:26:13.268469] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:07.292 [2024-12-06 10:26:13.268496] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:07.292 [2024-12-06 10:26:13.268508] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:07.292 [2024-12-06 10:26:13.268588] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:07.292 [2024-12-06 10:26:13.268596] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:07.292 
[2024-12-06 10:26:13.268604] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:07.292 [2024-12-06 10:26:13.268615] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268623] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268629] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:07.292 [2024-12-06 10:26:13.268634] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:07.292 [2024-12-06 10:26:13.268640] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:07.292 [2024-12-06 10:26:13.268646] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:07.292 [2024-12-06 10:26:13.268652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.268659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:07.292 [2024-12-06 10:26:13.268666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:27:07.292 [2024-12-06 10:26:13.268671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.268734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.292 [2024-12-06 10:26:13.268743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:07.292 [2024-12-06 10:26:13.268749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:07.292 [2024-12-06 10:26:13.268754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.292 [2024-12-06 10:26:13.268829] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:07.292 [2024-12-06 10:26:13.268837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:07.292 [2024-12-06 10:26:13.268843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:07.292 [2024-12-06 10:26:13.268861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:07.292 [2024-12-06 10:26:13.268879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.292 [2024-12-06 10:26:13.268894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:07.292 [2024-12-06 10:26:13.268901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:07.292 [2024-12-06 10:26:13.268906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.292 [2024-12-06 10:26:13.268911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:07.292 [2024-12-06 10:26:13.268916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:07.292 [2024-12-06 10:26:13.268921] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:07.292 [2024-12-06 10:26:13.268933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:07.292 [2024-12-06 10:26:13.268948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:07.292 [2024-12-06 10:26:13.268963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:07.292 [2024-12-06 10:26:13.268978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.292 [2024-12-06 10:26:13.268987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:07.292 [2024-12-06 10:26:13.268992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:07.292 [2024-12-06 10:26:13.268997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.292 [2024-12-06 10:26:13.269001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:07.292 [2024-12-06 10:26:13.269006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:07.292 [2024-12-06 10:26:13.269011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.292 [2024-12-06 10:26:13.269016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:07.292 [2024-12-06 10:26:13.269021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:07.292 [2024-12-06 10:26:13.269025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.292 [2024-12-06 10:26:13.269031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:07.292 [2024-12-06 10:26:13.269036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:07.292 [2024-12-06 10:26:13.269040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.292 [2024-12-06 10:26:13.269045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:07.292 [2024-12-06 10:26:13.269050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:07.292 [2024-12-06 10:26:13.269055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.292 [2024-12-06 10:26:13.269060] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:07.292 [2024-12-06 10:26:13.269067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:07.293 [2024-12-06 10:26:13.269074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.293 [2024-12-06 10:26:13.269080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.293 [2024-12-06 
10:26:13.269086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:07.293 [2024-12-06 10:26:13.269091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:07.293 [2024-12-06 10:26:13.269096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:07.293 [2024-12-06 10:26:13.269101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:07.293 [2024-12-06 10:26:13.269106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:07.293 [2024-12-06 10:26:13.269111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:07.293 [2024-12-06 10:26:13.269117] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:07.293 [2024-12-06 10:26:13.269124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.293 [2024-12-06 10:26:13.269130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:07.293 [2024-12-06 10:26:13.269136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:07.293 [2024-12-06 10:26:13.269142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:07.293 [2024-12-06 10:26:13.269148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:07.293 [2024-12-06 10:26:13.269153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:07.293 [2024-12-06 10:26:13.269158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:07.293 [2024-12-06 10:26:13.269163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:07.293 [2024-12-06 10:26:13.269168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:07.293 [2024-12-06 10:26:13.269173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:07.293 [2024-12-06 10:26:13.269179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:07.293 [2024-12-06 10:26:13.269185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:07.293 [2024-12-06 10:26:13.269190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:07.293 [2024-12-06 10:26:13.269195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:07.293 [2024-12-06 10:26:13.269201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:07.293 [2024-12-06 10:26:13.269206] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:07.293 [2024-12-06 10:26:13.269212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.293 [2024-12-06 10:26:13.269218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:07.293 [2024-12-06 10:26:13.269223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:07.293 [2024-12-06 10:26:13.269229] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:07.293 [2024-12-06 10:26:13.269234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:07.293 [2024-12-06 10:26:13.269242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.269248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:07.293 [2024-12-06 10:26:13.269254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:27:07.293 [2024-12-06 10:26:13.269259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.290399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.290426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:07.293 [2024-12-06 10:26:13.290435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.107 ms 00:27:07.293 [2024-12-06 10:26:13.290441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.290518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.290524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:07.293 [2024-12-06 10:26:13.290530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:07.293 [2024-12-06 10:26:13.290536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.327228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.327261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:07.293 [2024-12-06 10:26:13.327273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.652 ms 00:27:07.293 [2024-12-06 10:26:13.327279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.327317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.327325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:07.293 [2024-12-06 10:26:13.327331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:07.293 [2024-12-06 10:26:13.327337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.327691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.327705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:07.293 [2024-12-06 10:26:13.327713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:27:07.293 [2024-12-06 10:26:13.327722] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.327823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.327830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:07.293 [2024-12-06 10:26:13.327836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:07.293 [2024-12-06 10:26:13.327842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.338629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.338775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.293 [2024-12-06 10:26:13.338788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.771 ms 00:27:07.293 [2024-12-06 10:26:13.338794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.349151] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:07.293 [2024-12-06 10:26:13.349179] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:07.293 [2024-12-06 10:26:13.349189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.349195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:07.293 [2024-12-06 10:26:13.349202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.318 ms 00:27:07.293 [2024-12-06 10:26:13.349208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.367741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.367768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:07.293 [2024-12-06 10:26:13.367777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.502 ms 00:27:07.293 [2024-12-06 10:26:13.367785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.376736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.376761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:07.293 [2024-12-06 10:26:13.376769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.920 ms 00:27:07.293 [2024-12-06 10:26:13.376775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.385854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.385883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:07.293 [2024-12-06 10:26:13.385891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.051 ms 00:27:07.293 [2024-12-06 10:26:13.385897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.386384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.386411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:07.293 [2024-12-06 10:26:13.386418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:27:07.293 [2024-12-06 10:26:13.386424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 
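A quick cross-check of the geometry in the layout dump above: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB logged for the l2p region, and a 261120-block band works out to 1020 MiB, so the 103424.00 MiB base device holds roughly the 100 bands enumerated in the validity dumps. A standalone sketch of that arithmetic; the 4 KiB FTL block size is inferred from the region math, not stated in the log:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Values copied from the ftl_layout dump above. */
        const uint64_t l2p_entries   = 20971520;  /* "L2P entries"            */
        const uint64_t l2p_addr_size = 4;         /* "L2P address size"       */
        const uint64_t band_blocks   = 261120;    /* per-band block count     */
        const double   base_mib      = 103424.0;  /* base device capacity     */
        const uint64_t blk           = 4096;      /* FTL block size (inferred) */

        /* 20971520 * 4 B = 80 MiB -> the "Region l2p ... 80.00 MiB" line. */
        printf("l2p region: %.2f MiB\n",
               (double)(l2p_entries * l2p_addr_size) / (1024.0 * 1024.0));

        /* One band = 261120 * 4 KiB = 1020 MiB, so the base device holds
         * about 101 bands, matching Band 1..100 in the validity dumps. */
        double band_mib = (double)(band_blocks * blk) / (1024.0 * 1024.0);
        printf("band size:  %.0f MiB, bands: %.1f\n",
               band_mib, base_mib / band_mib);
        return 0;
    }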
[2024-12-06 10:26:13.431427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.431477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:07.293 [2024-12-06 10:26:13.431488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.988 ms 00:27:07.293 [2024-12-06 10:26:13.431494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.439726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:07.293 [2024-12-06 10:26:13.441891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.441911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:07.293 [2024-12-06 10:26:13.441920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.352 ms 00:27:07.293 [2024-12-06 10:26:13.441931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.441997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.442006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:07.293 [2024-12-06 10:26:13.442012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:07.293 [2024-12-06 10:26:13.442018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.442082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.293 [2024-12-06 10:26:13.442091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:07.293 [2024-12-06 10:26:13.442097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:07.293 [2024-12-06 10:26:13.442103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.293 [2024-12-06 10:26:13.442120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.294 [2024-12-06 10:26:13.442127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:07.294 [2024-12-06 10:26:13.442133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:07.294 [2024-12-06 10:26:13.442139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.294 [2024-12-06 10:26:13.442164] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:07.294 [2024-12-06 10:26:13.442171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.294 [2024-12-06 10:26:13.442177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:07.294 [2024-12-06 10:26:13.442183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:07.294 [2024-12-06 10:26:13.442191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.599 [2024-12-06 10:26:13.460089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.599 [2024-12-06 10:26:13.460219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:07.599 [2024-12-06 10:26:13.460266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.884 ms 00:27:07.599 [2024-12-06 10:26:13.460285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.599 [2024-12-06 10:26:13.460346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.599 [2024-12-06 10:26:13.460370] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:07.599 [2024-12-06 10:26:13.460387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:07.599 [2024-12-06 10:26:13.460402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.599 [2024-12-06 10:26:13.461725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.925 ms, result 0 00:27:08.567  [2024-12-06T10:26:15.676Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-06T10:26:16.616Z] Copying: 43/1024 [MB] (21 MBps) [2024-12-06T10:26:17.561Z] Copying: 84/1024 [MB] (40 MBps) [2024-12-06T10:26:18.507Z] Copying: 107/1024 [MB] (22 MBps) [2024-12-06T10:26:19.893Z] Copying: 121/1024 [MB] (13 MBps) [2024-12-06T10:26:20.837Z] Copying: 143/1024 [MB] (22 MBps) [2024-12-06T10:26:21.776Z] Copying: 158/1024 [MB] (15 MBps) [2024-12-06T10:26:22.712Z] Copying: 179/1024 [MB] (20 MBps) [2024-12-06T10:26:23.654Z] Copying: 203/1024 [MB] (23 MBps) [2024-12-06T10:26:24.597Z] Copying: 228/1024 [MB] (25 MBps) [2024-12-06T10:26:25.540Z] Copying: 252/1024 [MB] (23 MBps) [2024-12-06T10:26:26.493Z] Copying: 281/1024 [MB] (28 MBps) [2024-12-06T10:26:27.880Z] Copying: 309/1024 [MB] (28 MBps) [2024-12-06T10:26:28.824Z] Copying: 324/1024 [MB] (15 MBps) [2024-12-06T10:26:29.766Z] Copying: 349/1024 [MB] (24 MBps) [2024-12-06T10:26:30.710Z] Copying: 393/1024 [MB] (44 MBps) [2024-12-06T10:26:31.654Z] Copying: 416/1024 [MB] (22 MBps) [2024-12-06T10:26:32.599Z] Copying: 426/1024 [MB] (10 MBps) [2024-12-06T10:26:33.544Z] Copying: 465/1024 [MB] (38 MBps) [2024-12-06T10:26:34.489Z] Copying: 495/1024 [MB] (30 MBps) [2024-12-06T10:26:35.874Z] Copying: 526/1024 [MB] (30 MBps) [2024-12-06T10:26:36.817Z] Copying: 568/1024 [MB] (42 MBps) [2024-12-06T10:26:37.762Z] Copying: 590/1024 [MB] (22 MBps) [2024-12-06T10:26:38.707Z] Copying: 619/1024 [MB] (28 MBps) [2024-12-06T10:26:39.652Z] Copying: 643/1024 [MB] (24 MBps) [2024-12-06T10:26:40.597Z] Copying: 680/1024 [MB] (37 MBps) [2024-12-06T10:26:41.542Z] Copying: 710/1024 [MB] (29 MBps) [2024-12-06T10:26:42.489Z] Copying: 757/1024 [MB] (46 MBps) [2024-12-06T10:26:43.507Z] Copying: 789/1024 [MB] (32 MBps) [2024-12-06T10:26:44.898Z] Copying: 809/1024 [MB] (19 MBps) [2024-12-06T10:26:45.843Z] Copying: 831/1024 [MB] (21 MBps) [2024-12-06T10:26:46.808Z] Copying: 854/1024 [MB] (23 MBps) [2024-12-06T10:26:47.809Z] Copying: 869/1024 [MB] (14 MBps) [2024-12-06T10:26:48.750Z] Copying: 890/1024 [MB] (20 MBps) [2024-12-06T10:26:49.689Z] Copying: 910/1024 [MB] (19 MBps) [2024-12-06T10:26:50.631Z] Copying: 928/1024 [MB] (18 MBps) [2024-12-06T10:26:51.575Z] Copying: 939/1024 [MB] (10 MBps) [2024-12-06T10:26:52.521Z] Copying: 972032/1048576 [kB] (10216 kBps) [2024-12-06T10:26:53.911Z] Copying: 959/1024 [MB] (10 MBps) [2024-12-06T10:26:54.484Z] Copying: 969/1024 [MB] (10 MBps) [2024-12-06T10:26:55.870Z] Copying: 979/1024 [MB] (10 MBps) [2024-12-06T10:26:56.811Z] Copying: 990/1024 [MB] (10 MBps) [2024-12-06T10:26:57.756Z] Copying: 1007/1024 [MB] (17 MBps) [2024-12-06T10:26:58.017Z] Copying: 1023/1024 [MB] (15 MBps) [2024-12-06T10:26:58.017Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-06 10:26:57.875977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.850 [2024-12-06 10:26:57.876031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:51.850 [2024-12-06 10:26:57.876046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 
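The copy loop above moved 1024 MB between the end of 'FTL startup' (10:26:13.46) and the first shutdown step (10:26:57.88), roughly 44.4 s of wall time, which is where the logged average of 23 MBps comes from. Likewise, the WAF in the 'Dump statistics' block further below is simply total writes divided by user writes; the earlier statistics dump printed "WAF: inf" because user writes were still 0 at that point. A standalone arithmetic check, with all constants copied from this log:

    #include <stdio.h>

    int main(void)
    {
        /* Average throughput: 1024 MB over ~44.4 s
         * (10:26:13.46 -> 10:26:57.88), as reported above. */
        printf("average: %.0f MBps\n", 1024.0 / 44.4);   /* ~23 MBps */

        /* Write amplification from the statistics dump below:
         * WAF = total writes / user writes. */
        printf("WAF: %.4f\n", 112576.0 / 111616.0);      /* 1.0086 */
        return 0;
    }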
00:27:51.850 [2024-12-06 10:26:57.876055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.850 [2024-12-06 10:26:57.876077] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:51.850 [2024-12-06 10:26:57.878747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.850 [2024-12-06 10:26:57.878780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:51.850 [2024-12-06 10:26:57.878790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.655 ms 00:27:51.850 [2024-12-06 10:26:57.878803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.850 [2024-12-06 10:26:57.890033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.850 [2024-12-06 10:26:57.890189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:51.850 [2024-12-06 10:26:57.890208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.495 ms 00:27:51.850 [2024-12-06 10:26:57.890217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.850 [2024-12-06 10:26:57.916829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.850 [2024-12-06 10:26:57.916985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:51.850 [2024-12-06 10:26:57.917005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.589 ms 00:27:51.850 [2024-12-06 10:26:57.917015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.850 [2024-12-06 10:26:57.923222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.850 [2024-12-06 10:26:57.923255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:51.850 [2024-12-06 10:26:57.923265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:27:51.850 [2024-12-06 10:26:57.923273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.850 [2024-12-06 10:26:57.947890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.850 [2024-12-06 10:26:57.947926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:51.850 [2024-12-06 10:26:57.947936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.579 ms 00:27:51.850 [2024-12-06 10:26:57.947944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:51.850 [2024-12-06 10:26:57.962633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:51.850 [2024-12-06 10:26:57.962768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:51.850 [2024-12-06 10:26:57.962786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.654 ms 00:27:51.850 [2024-12-06 10:26:57.962794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.110 [2024-12-06 10:26:58.244199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.110 [2024-12-06 10:26:58.244245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:52.110 [2024-12-06 10:26:58.244265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 281.369 ms 00:27:52.110 [2024-12-06 10:26:58.244273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.110 [2024-12-06 10:26:58.269485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.110 [2024-12-06 
10:26:58.269674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:52.110 [2024-12-06 10:26:58.269693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.197 ms 00:27:52.110 [2024-12-06 10:26:58.269714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.372 [2024-12-06 10:26:58.295749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.372 [2024-12-06 10:26:58.295815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:52.372 [2024-12-06 10:26:58.295831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.711 ms 00:27:52.372 [2024-12-06 10:26:58.295839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.372 [2024-12-06 10:26:58.321113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.372 [2024-12-06 10:26:58.321339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:52.372 [2024-12-06 10:26:58.321362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.218 ms 00:27:52.372 [2024-12-06 10:26:58.321370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.372 [2024-12-06 10:26:58.347048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:52.372 [2024-12-06 10:26:58.347101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:52.372 [2024-12-06 10:26:58.347115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.563 ms 00:27:52.372 [2024-12-06 10:26:58.347122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:52.372 [2024-12-06 10:26:58.347172] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:52.372 [2024-12-06 10:26:58.347189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111616 / 261120 wr_cnt: 1 state: open 00:27:52.372 [2024-12-06 10:26:58.347200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347290] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 
[2024-12-06 10:26:58.347511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:52.372 [2024-12-06 10:26:58.347733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free
[2024-12-06 10:26:58.347742] ftl_dev_dump_bands: [FTL][ftl0] Bands 63-100: 0 / 261120 wr_cnt: 0 state: free
[2024-12-06 10:26:58.348057] ftl_dev_dump_stats: [FTL][ftl0]
  device UUID:      6e267339-64bd-4d3b-92f1-986b0822465a
  total valid LBAs: 111616
  total writes:     112576
  user writes:      111616
  WAF:              1.0086
  limits:           crit: 0  high: 0  low: 0  start: 0
[2024-12-06 10:26:58.348162] [FTL][ftl0] Action 'Dump statistics': duration 0.990 ms, status 0
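NOTE: the WAF figure in the dump above is simply total writes divided by user writes; a one-line shell check (annotation only, not part of the test output) reproduces the logged value:

    $ echo "scale=4; 112576 / 111616" | bc
    1.0086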
[2024-12-06 10:26:58.362000] [FTL][ftl0] Action 'Deinitialize L2P': duration 13.746 ms, status 0
[2024-12-06 10:26:58.362487] [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.393 ms, status 0
[2024-12-06 10:26:58.399142] [FTL][ftl0] Rollback 'Initialize reloc': duration 0.000 ms, status 0
[2024-12-06 10:26:58.399284] [FTL][ftl0] Rollback 'Initialize bands metadata': duration 0.000 ms, status 0
[2024-12-06 10:26:58.399400] [FTL][ftl0] Rollback 'Initialize trim map': duration 0.000 ms, status 0
[2024-12-06 10:26:58.399442] [FTL][ftl0] Rollback 'Initialize valid map': duration 0.000 ms, status 0
[2024-12-06 10:26:58.484699] [FTL][ftl0] Rollback 'Initialize NV cache': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554006] [FTL][ftl0] Rollback 'Initialize metadata': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554340] [FTL][ftl0] Rollback 'Initialize core IO channel': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554435] [FTL][ftl0] Rollback 'Initialize bands': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554609] [FTL][ftl0] Rollback 'Initialize memory pools': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554678] [FTL][ftl0] Rollback 'Initialize superblock': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554751] [FTL][ftl0] Rollback 'Open cache bdev': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554829] [FTL][ftl0] Rollback 'Open base bdev': duration 0.000 ms, status 0
[2024-12-06 10:26:58.554994] [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 679.002 ms, result 0
10:26:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
10:27:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-06 10:27:01.838133] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization...
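NOTE: spdk_dd here reads --count=262144 blocks from the ftl0 bdev into testfile; the progress ticks later in the log report 1048576 kB total, so each block is 4 KiB and the transfer is 1 GiB (annotation only, not part of the test output):

    $ echo $(( 262144 * 4096 / 1024 / 1024 ))  # MiB
    1024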
[2024-12-06 10:27:01.838269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81674 ]
[2024-12-06 10:27:02.003170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-06 10:27:02.100360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-06 10:27:02.373322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
[2024-12-06 10:27:02.536550] [FTL][ftl0] Action 'Check configuration': duration 0.005 ms, status 0
[2024-12-06 10:27:02.536929] [FTL][ftl0] Action 'Open base bdev': duration 0.049 ms, status 0
[2024-12-06 10:27:02.536984] [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-12-06 10:27:02.537758] [FTL][ftl0] Using bdev as NV Cache device
[2024-12-06 10:27:02.537780] [FTL][ftl0] Action 'Open cache bdev': duration 0.802 ms, status 0
[2024-12-06 10:27:02.539572] [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-12-06 10:27:02.554066] [FTL][ftl0] Action 'Load super block': duration 14.496 ms, status 0
[2024-12-06 10:27:02.554238] [FTL][ftl0] Action 'Validate super block': duration 0.032 ms, status 0
[2024-12-06 10:27:02.562800] [FTL][ftl0] Action 'Initialize memory pools': duration 8.448 ms, status 0
[2024-12-06 10:27:02.563133] [FTL][ftl0] Action 'Initialize bands': duration 0.062 ms, status 0
[2024-12-06 10:27:02.563210] [FTL][ftl0] Action 'Register IO device': duration 0.009 ms, status 0
[2024-12-06 10:27:02.563264] [FTL][ftl0] FTL IO channel created on app_thread
[2024-12-06 10:27:02.567317] [FTL][ftl0] Action 'Initialize core IO channel': duration 4.058 ms, status 0
[2024-12-06 10:27:02.567432] [FTL][ftl0] Action 'Decorate bands': duration 0.017 ms, status 0
[2024-12-06 10:27:02.567533] [FTL][ftl0] FTL layout setup mode 0
[2024-12-06 10:27:02.567559] [FTL][ftl0] superblock v5: nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
[2024-12-06 10:27:02.567728] [FTL][ftl0] superblock v5: nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
[2024-12-06 10:27:02.567764] [FTL][ftl0] Base device capacity: 103424.00 MiB
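NOTE: the superblock v5 blob sizes above are logged in hex; in decimal the nvc, base, and layout blobs are 336, 72, and 400 bytes (shell conversion, annotation only, not part of the test output):

    $ printf '%d %d %d\n' 0x150 0x48 0x190
    336 72 400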
[2024-12-06 10:27:02.567773] [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-12-06 10:27:02.567781] [FTL][ftl0] L2P entries: 20971520
[2024-12-06 10:27:02.567789] [FTL][ftl0] L2P address size: 4
[2024-12-06 10:27:02.567800] [FTL][ftl0] P2L checkpoint pages: 2048
[2024-12-06 10:27:02.567808] [FTL][ftl0] NV cache chunk count 5
[2024-12-06 10:27:02.567816] [FTL][ftl0] Action 'Initialize layout': duration 0.287 ms, status 0
[2024-12-06 10:27:02.567927] [FTL][ftl0] Action 'Verify layout': duration 0.069 ms, status 0
[2024-12-06 10:27:02.568061] [FTL][ftl0] NV cache layout:
  Region            offset (MiB)   blocks (MiB)
  sb                      0.00           0.12
  l2p                     0.12          80.00
  band_md                80.12           0.50
  band_md_mirror         80.62           0.50
  nvc_md                113.88           0.12
  nvc_md_mirror         114.00           0.12
  p2l0                   81.12           8.00
  p2l1                   89.12           8.00
  p2l2                   97.12           8.00
  p2l3                  105.12           8.00
  trim_md               113.12           0.25
  trim_md_mirror        113.38           0.25
  trim_log              113.62           0.12
  trim_log_mirror       113.75           0.12
[2024-12-06 10:27:02.568415] [FTL][ftl0] Base device layout:
  Region            offset (MiB)   blocks (MiB)
  sb_mirror               0.00           0.12
  vmap               102400.25           3.38
  data_btm                0.25      102400.00
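NOTE: the 80.00 MiB l2p region follows directly from the two parameters logged above, 20971520 L2P entries at 4 bytes each (annotation only, not part of the test output):

    $ echo $(( 20971520 * 4 / 1024 / 1024 ))  # MiB
    80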
[2024-12-06 10:27:02.568508] [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0        ver:5  blk_offs:0x0       blk_sz:0x20
  Region type:0x2        ver:0  blk_offs:0x20      blk_sz:0x5000
  Region type:0x3        ver:2  blk_offs:0x5020    blk_sz:0x80
  Region type:0x4        ver:2  blk_offs:0x50a0    blk_sz:0x80
  Region type:0xa        ver:2  blk_offs:0x5120    blk_sz:0x800
  Region type:0xb        ver:2  blk_offs:0x5920    blk_sz:0x800
  Region type:0xc        ver:2  blk_offs:0x6120    blk_sz:0x800
  Region type:0xd        ver:2  blk_offs:0x6920    blk_sz:0x800
  Region type:0xe        ver:0  blk_offs:0x7120    blk_sz:0x40
  Region type:0xf        ver:0  blk_offs:0x7160    blk_sz:0x40
  Region type:0x10       ver:1  blk_offs:0x71a0    blk_sz:0x20
  Region type:0x11       ver:1  blk_offs:0x71c0    blk_sz:0x20
  Region type:0x6        ver:2  blk_offs:0x71e0    blk_sz:0x20
  Region type:0x7        ver:2  blk_offs:0x7200    blk_sz:0x20
  Region type:0xfffffffe ver:0  blk_offs:0x7220    blk_sz:0x13c0e0
[2024-12-06 10:27:02.568643] [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1        ver:5  blk_offs:0x0       blk_sz:0x20
  Region type:0xfffffffe ver:0  blk_offs:0x20      blk_sz:0x20
  Region type:0x9        ver:0  blk_offs:0x40      blk_sz:0x1900000
  Region type:0x5        ver:0  blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0  blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-12-06 10:27:02.568697] [FTL][ftl0] Action 'Layout upgrade': duration 0.705 ms, status 0
[2024-12-06 10:27:02.601291] [FTL][ftl0] Action 'Initialize metadata': duration 32.520 ms, status 0
[2024-12-06 10:27:02.601489] [FTL][ftl0] Action 'Initialize band addresses': duration 0.089 ms, status 0
[2024-12-06 10:27:02.649951] [FTL][ftl0] Action 'Initialize NV cache': duration 48.368 ms, status 0
[2024-12-06 10:27:02.650090] [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
[2024-12-06 10:27:02.650808] [FTL][ftl0] Action 'Initialize trim map': duration 0.604 ms, status 0
[2024-12-06 10:27:02.651328] [FTL][ftl0] Action 'Initialize bands metadata': duration 0.131 ms, status 0
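NOTE: the SB metadata rows are sized in 4 KiB FTL blocks, so they line up with the MiB tables above; for instance the 0x5000-block region matches the 80.00 MiB l2p region and the 0x1900000-block region matches the 102400.00 MiB data_btm region (annotation only, not part of the test output):

    $ echo $(( 0x5000 * 4096 / 1024 / 1024 )) $(( 0x1900000 * 4096 / 1024 / 1024 ))  # MiB
    80 102400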
[2024-12-06 10:27:02.667517] [FTL][ftl0] Action 'Initialize reloc': duration 16.133 ms, status 0
[2024-12-06 10:27:02.682417] [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-12-06 10:27:02.682641] [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-12-06 10:27:02.682663] [FTL][ftl0] Action 'Restore NV cache metadata': duration 14.952 ms, status 0
[2024-12-06 10:27:02.708917] [FTL][ftl0] Action 'Restore valid map metadata': duration 26.056 ms, status 0
[2024-12-06 10:27:02.722233] [FTL][ftl0] Action 'Restore band info metadata': duration 13.174 ms, status 0
[2024-12-06 10:27:02.735231] [FTL][ftl0] Action 'Restore trim metadata': duration 12.874 ms, status 0
[2024-12-06 10:27:02.735968] [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.546 ms, status 0
[2024-12-06 10:27:02.804354] [FTL][ftl0] Action 'Restore P2L checkpoints': duration 68.313 ms, status 0
[2024-12-06 10:27:02.816610] [FTL][ftl0] l2p maximum resident size is: 9 (of 10) MiB
[2024-12-06 10:27:02.819990] [FTL][ftl0] Action 'Initialize L2P': duration 15.435 ms, status 0
[2024-12-06 10:27:02.820331] [FTL][ftl0] Action 'Restore L2P': duration 0.020 ms, status 0
[2024-12-06 10:27:02.822241] [FTL][ftl0] Action 'Finalize band initialization': duration 1.834 ms, status 0
[2024-12-06 10:27:02.822351] [FTL][ftl0] Action 'Start core poller': duration 0.007 ms, status 0
[2024-12-06 10:27:02.822427] [FTL][ftl0] Self test skipped
[2024-12-06 10:27:02.822440] [FTL][ftl0] Action 'Self test on startup': duration 0.013 ms, status 0
[2024-12-06 10:27:02.849223] [FTL][ftl0] Action 'Set FTL dirty state': duration 26.713 ms, status 0
[2024-12-06 10:27:02.849633] [FTL][ftl0] Action 'Finalize initialization': duration 0.038 ms, status 0
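NOTE: of the 313.985 ms 'FTL startup' total reported just below, the thirteen steps logged at 4 ms or more account for about 301.5 ms; everything else is sub-millisecond (annotation only, not part of the test output):

    $ echo "68.313+48.368+32.520+26.713+26.056+16.133+15.435+14.952+14.496+13.174+12.874+8.448+4.058" | bc
    301.540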
[2024-12-06 10:27:02.851033] [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.985 ms, result 0
spdk_dd copy progress (cumulative, ticks from 10:27:05Z to 10:27:50Z):
  1084/1048576 [kB] (1084 kBps), 4736/1048576 [kB] (3652 kBps), 19/1024 [MB] (14 MBps), 44/1024 [MB] (25 MBps),
  61/1024 [MB] (16 MBps), 92/1024 [MB] (31 MBps), 122/1024 [MB] (30 MBps), 148/1024 [MB] (25 MBps),
  171/1024 [MB] (23 MBps), 198/1024 [MB] (26 MBps), 219/1024 [MB] (21 MBps), 246/1024 [MB] (27 MBps),
  268/1024 [MB] (21 MBps), 296/1024 [MB] (28 MBps), 318/1024 [MB] (22 MBps), 336/1024 [MB] (18 MBps),
  360/1024 [MB] (23 MBps), 378/1024 [MB] (18 MBps), 393/1024 [MB] (15 MBps), 409/1024 [MB] (15 MBps),
  425/1024 [MB] (16 MBps), 447/1024 [MB] (21 MBps), 463/1024 [MB] (15 MBps), 482/1024 [MB] (19 MBps),
  503/1024 [MB] (21 MBps), 537/1024 [MB] (34 MBps), 563/1024 [MB] (25 MBps), 597/1024 [MB] (34 MBps),
  614/1024 [MB] (16 MBps), 630/1024 [MB] (16 MBps), 649/1024 [MB] (18 MBps), 674/1024 [MB] (24 MBps),
  690/1024 [MB] (16 MBps), 717/1024 [MB] (26 MBps), 740/1024 [MB] (23 MBps), 764/1024 [MB] (23 MBps),
  781/1024 [MB] (17 MBps), 808/1024 [MB] (26 MBps), 834/1024 [MB] (26 MBps), 861/1024 [MB] (26 MBps),
  889/1024 [MB] (28 MBps), 918/1024 [MB] (28 MBps), 946/1024 [MB] (28 MBps), 978/1024 [MB] (31 MBps),
  997/1024 [MB] (19 MBps), 1019/1024 [MB] (22 MBps), 1024/1024 [MB] (average 22 MBps)
[2024-12-06 10:27:49.836324] [FTL][ftl0] Action 'Deinit core IO channel': duration 0.006 ms, status 0
[2024-12-06 10:27:49.836503] [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-06 10:27:49.840852] [FTL][ftl0] Action 'Unregister IO device': duration 4.328 ms, status 0
[2024-12-06 10:27:49.841266] [FTL][ftl0] Action 'Stop core poller': duration 0.287 ms, status 0
[2024-12-06 10:27:49.855874] [FTL][ftl0] Action 'Persist L2P': duration 14.536 ms, status 0
[2024-12-06 10:27:49.862365] [FTL][ftl0] Action 'Finish L2P trims': duration 6.369 ms, status 0
[2024-12-06 10:27:49.889760] [FTL][ftl0] Action 'Persist NV cache metadata': duration 27.244 ms, status 0
[2024-12-06 10:27:49.906904] [FTL][ftl0] Action 'Persist valid map metadata': duration 17.021 ms, status 0
[2024-12-06 10:27:49.912080] [FTL][ftl0] Action 'Persist P2L metadata': duration 5.050 ms, status 0
[2024-12-06 10:27:49.938296] [FTL][ftl0] Action 'Persist band info metadata': duration 26.096 ms, status 0
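NOTE: the dd pass above moves 1024 MB between the first progress tick at 10:27:05 and the last at 10:27:50, roughly 46 s of wall clock, consistent with the reported 22 MBps average (annotation only, not part of the test output):

    $ echo "scale=1; 1024 / 22" | bc  # seconds
    46.5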
[2024-12-06 10:27:49.963691] [FTL][ftl0] Action 'Persist trim metadata': duration 25.281 ms, status 0
[2024-12-06 10:27:49.989365] [FTL][ftl0] Action 'Persist superblock': duration 25.115 ms, status 0
[2024-12-06 10:27:50.015998] [FTL][ftl0] Action 'Set FTL clean state': duration 26.453 ms, status 0
[2024-12-06 10:27:50.016133] ftl_dev_dump_bands: [FTL][ftl0] Bands validity:
  Band 1:       261120 / 261120   wr_cnt: 1   state: closed
  Band 2:         1536 / 261120   wr_cnt: 1   state: open
  Bands 3-100:       0 / 261120   wr_cnt: 0   state: free
[2024-12-06 10:27:50.017034] ftl_dev_dump_stats: [FTL][ftl0]
  device UUID:      6e267339-64bd-4d3b-92f1-986b0822465a
  total valid LBAs: 262656
  total writes:     153024
  user writes:      151040
  WAF:              1.0131
  limits:           crit: 0  high: 0  low: 0  start: 0
[2024-12-06 10:27:50.017135] [FTL][ftl0] Action 'Dump statistics': duration 1.003 ms, status 0
[FTL][ftl0] Action 00:28:43.883 [2024-12-06 10:27:50.031858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:43.883 [2024-12-06 10:27:50.031872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.607 ms 00:28:43.883 [2024-12-06 10:27:50.031881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:43.883 [2024-12-06 10:27:50.032300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:43.883 [2024-12-06 10:27:50.032311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:43.883 [2024-12-06 10:27:50.032322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:28:43.883 [2024-12-06 10:27:50.032329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.070705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.070925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:44.145 [2024-12-06 10:27:50.070948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.070959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.071034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.071044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:44.145 [2024-12-06 10:27:50.071053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.071062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.071178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.071189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:44.145 [2024-12-06 10:27:50.071199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.071206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.071224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.071232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:44.145 [2024-12-06 10:27:50.071241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.071249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.156413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.156503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:44.145 [2024-12-06 10:27:50.156518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.156527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.225347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.225401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:44.145 [2024-12-06 10:27:50.225414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.225423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 
[2024-12-06 10:27:50.225508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.225526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:44.145 [2024-12-06 10:27:50.225535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.225544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.225602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.225613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:44.145 [2024-12-06 10:27:50.225622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.225631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.225743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.225758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:44.145 [2024-12-06 10:27:50.225768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.225776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.225807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.225817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:44.145 [2024-12-06 10:27:50.225825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.225833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.225877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.225887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:44.145 [2024-12-06 10:27:50.225898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.225906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.225955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:44.145 [2024-12-06 10:27:50.225966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:44.145 [2024-12-06 10:27:50.225976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:44.145 [2024-12-06 10:27:50.225985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:44.145 [2024-12-06 10:27:50.226121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.781 ms, result 0 00:28:45.097 00:28:45.097 00:28:45.097 10:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:46.485 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:46.485 10:27:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:46.746 [2024-12-06 10:27:52.685285] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
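A note on the ftl_debug.c statistics dump above: the reported WAF is simply total media writes divided by user writes, 153024 / 151040 ≈ 1.0131. A minimal sketch of that derivation, with values copied verbatim from the two stats dumps in this log; the waf() helper is illustrative only, not SPDK code:

    def waf(total_writes: int, user_writes: int) -> float:
        # ftl_debug.c prints "inf" when there have been no user writes yet
        return float("inf") if user_writes == 0 else total_writes / user_writes

    assert f"{waf(153024, 151040):.4f}" == "1.0131"  # dump above: WAF: 1.0131
    assert waf(960, 0) == float("inf")               # later shutdown dump: WAF: inf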
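The md5sum -c pass just above is what declares testfile OK, and the spdk_dd run that follows reads the next 262144-block slice of ftl0 (--skip=262144) into testfile2 so it can be checked the same way. A minimal Python equivalent of that single-file md5sum -c step, assuming the usual "<hex digest>  <file name>" manifest format; md5_matches is a hypothetical helper, not part of the test scripts:

    import hashlib

    def md5_matches(data_path: str, manifest_path: str) -> bool:
        expected = open(manifest_path).read().split()[0]  # "<hex digest>  <file name>"
        h = hashlib.md5()
        with open(data_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == expected

    # e.g. md5_matches(".../test/ftl/testfile", ".../test/ftl/testfile.md5") -> True ("OK")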
00:28:46.746 [2024-12-06 10:27:52.685405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82193 ] 00:28:46.746 [2024-12-06 10:27:52.847133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.007 [2024-12-06 10:27:52.965496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.269 [2024-12-06 10:27:53.261552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.269 [2024-12-06 10:27:53.261645] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.269 [2024-12-06 10:27:53.423655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.269 [2024-12-06 10:27:53.423879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:47.269 [2024-12-06 10:27:53.423913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:47.269 [2024-12-06 10:27:53.423926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.269 [2024-12-06 10:27:53.424019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.269 [2024-12-06 10:27:53.424038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:47.269 [2024-12-06 10:27:53.424052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:47.269 [2024-12-06 10:27:53.424063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.269 [2024-12-06 10:27:53.424099] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:47.269 [2024-12-06 10:27:53.425071] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:47.269 [2024-12-06 10:27:53.425117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.269 [2024-12-06 10:27:53.425131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:47.269 [2024-12-06 10:27:53.425145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.027 ms 00:28:47.269 [2024-12-06 10:27:53.425156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.269 [2024-12-06 10:27:53.427026] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:47.532 [2024-12-06 10:27:53.441562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.441614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:47.532 [2024-12-06 10:27:53.441634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.538 ms 00:28:47.532 [2024-12-06 10:27:53.441647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.441754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.441771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:47.532 [2024-12-06 10:27:53.441787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:28:47.532 [2024-12-06 10:27:53.441801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.450365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:47.532 [2024-12-06 10:27:53.450418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:47.532 [2024-12-06 10:27:53.450435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.454 ms 00:28:47.532 [2024-12-06 10:27:53.450474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.450587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.450601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:47.532 [2024-12-06 10:27:53.450615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:47.532 [2024-12-06 10:27:53.450628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.450689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.450705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:47.532 [2024-12-06 10:27:53.450718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:47.532 [2024-12-06 10:27:53.450731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.450770] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:47.532 [2024-12-06 10:27:53.454966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.455014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:47.532 [2024-12-06 10:27:53.455036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.204 ms 00:28:47.532 [2024-12-06 10:27:53.455047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.455104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.455118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:47.532 [2024-12-06 10:27:53.455132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:47.532 [2024-12-06 10:27:53.455144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.455217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:47.532 [2024-12-06 10:27:53.455252] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:47.532 [2024-12-06 10:27:53.455307] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:47.532 [2024-12-06 10:27:53.455336] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:47.532 [2024-12-06 10:27:53.455516] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:47.532 [2024-12-06 10:27:53.455535] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:47.532 [2024-12-06 10:27:53.455554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:47.532 [2024-12-06 10:27:53.455570] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:47.532 [2024-12-06 10:27:53.455586] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:47.532 [2024-12-06 10:27:53.455600] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:47.532 [2024-12-06 10:27:53.455612] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:47.532 [2024-12-06 10:27:53.455629] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:47.532 [2024-12-06 10:27:53.455642] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:47.532 [2024-12-06 10:27:53.455655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.455666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:47.532 [2024-12-06 10:27:53.455679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:28:47.532 [2024-12-06 10:27:53.455692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.455814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.532 [2024-12-06 10:27:53.455829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:47.532 [2024-12-06 10:27:53.455842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:28:47.532 [2024-12-06 10:27:53.455853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.532 [2024-12-06 10:27:53.456002] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:47.532 [2024-12-06 10:27:53.456030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:47.532 [2024-12-06 10:27:53.456045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.532 [2024-12-06 10:27:53.456058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.532 [2024-12-06 10:27:53.456070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:47.532 [2024-12-06 10:27:53.456083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:47.532 [2024-12-06 10:27:53.456095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:47.532 [2024-12-06 10:27:53.456106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:47.532 [2024-12-06 10:27:53.456118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.533 [2024-12-06 10:27:53.456140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:47.533 [2024-12-06 10:27:53.456152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:47.533 [2024-12-06 10:27:53.456164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.533 [2024-12-06 10:27:53.456200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:47.533 [2024-12-06 10:27:53.456213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:47.533 [2024-12-06 10:27:53.456223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:47.533 [2024-12-06 10:27:53.456246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:47.533 [2024-12-06 10:27:53.456258] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:47.533 [2024-12-06 10:27:53.456281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.533 [2024-12-06 10:27:53.456304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:47.533 [2024-12-06 10:27:53.456315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.533 [2024-12-06 10:27:53.456338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:47.533 [2024-12-06 10:27:53.456349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.533 [2024-12-06 10:27:53.456371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:47.533 [2024-12-06 10:27:53.456383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.533 [2024-12-06 10:27:53.456405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:47.533 [2024-12-06 10:27:53.456416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.533 [2024-12-06 10:27:53.456438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:47.533 [2024-12-06 10:27:53.456470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:47.533 [2024-12-06 10:27:53.456483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.533 [2024-12-06 10:27:53.456497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:47.533 [2024-12-06 10:27:53.456509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:47.533 [2024-12-06 10:27:53.456524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:47.533 [2024-12-06 10:27:53.456549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:47.533 [2024-12-06 10:27:53.456561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:47.533 [2024-12-06 10:27:53.456586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:47.533 [2024-12-06 10:27:53.456599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.533 [2024-12-06 10:27:53.456613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.533 [2024-12-06 10:27:53.456626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:47.533 [2024-12-06 10:27:53.456639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:47.533 [2024-12-06 10:27:53.456652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:47.533 
[2024-12-06 10:27:53.456665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:47.533 [2024-12-06 10:27:53.456677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:47.533 [2024-12-06 10:27:53.456689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:47.533 [2024-12-06 10:27:53.456704] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:47.533 [2024-12-06 10:27:53.456720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.533 [2024-12-06 10:27:53.456740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:47.533 [2024-12-06 10:27:53.456753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:47.533 [2024-12-06 10:27:53.456766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:47.533 [2024-12-06 10:27:53.456779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:47.533 [2024-12-06 10:27:53.456791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:47.533 [2024-12-06 10:27:53.456804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:47.533 [2024-12-06 10:27:53.456817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:47.533 [2024-12-06 10:27:53.456830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:47.533 [2024-12-06 10:27:53.456843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:47.533 [2024-12-06 10:27:53.456856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:47.533 [2024-12-06 10:27:53.456868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:47.533 [2024-12-06 10:27:53.456882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:47.533 [2024-12-06 10:27:53.456895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:47.533 [2024-12-06 10:27:53.456909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:47.533 [2024-12-06 10:27:53.456922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:47.533 [2024-12-06 10:27:53.456936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.533 [2024-12-06 10:27:53.456951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:47.533 [2024-12-06 10:27:53.456965] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:47.533 [2024-12-06 10:27:53.456978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:47.533 [2024-12-06 10:27:53.456991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:47.533 [2024-12-06 10:27:53.457004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.533 [2024-12-06 10:27:53.457017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:47.533 [2024-12-06 10:27:53.457031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:28:47.533 [2024-12-06 10:27:53.457044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.533 [2024-12-06 10:27:53.490185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.533 [2024-12-06 10:27:53.490383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:47.533 [2024-12-06 10:27:53.490492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.062 ms 00:28:47.533 [2024-12-06 10:27:53.490540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.533 [2024-12-06 10:27:53.490685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.533 [2024-12-06 10:27:53.490730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:47.533 [2024-12-06 10:27:53.490765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:28:47.533 [2024-12-06 10:27:53.490799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.533 [2024-12-06 10:27:53.537048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.533 [2024-12-06 10:27:53.537249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:47.533 [2024-12-06 10:27:53.537353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.137 ms 00:28:47.533 [2024-12-06 10:27:53.537395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.533 [2024-12-06 10:27:53.537497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.533 [2024-12-06 10:27:53.537541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:47.533 [2024-12-06 10:27:53.537585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:47.533 [2024-12-06 10:27:53.537618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.533 [2024-12-06 10:27:53.538287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.533 [2024-12-06 10:27:53.538437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:47.533 [2024-12-06 10:27:53.538560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:28:47.533 [2024-12-06 10:27:53.538661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.533 [2024-12-06 10:27:53.538892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.533 [2024-12-06 10:27:53.538987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:47.534 [2024-12-06 10:27:53.539037] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:28:47.534 [2024-12-06 10:27:53.539070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.534 [2024-12-06 10:27:53.555214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.534 [2024-12-06 10:27:53.555393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:47.534 [2024-12-06 10:27:53.555417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.006 ms 00:28:47.534 [2024-12-06 10:27:53.555429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.534 [2024-12-06 10:27:53.570043] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:47.534 [2024-12-06 10:27:53.570232] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:47.534 [2024-12-06 10:27:53.570334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.534 [2024-12-06 10:27:53.570369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:47.534 [2024-12-06 10:27:53.570406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.732 ms 00:28:47.534 [2024-12-06 10:27:53.570442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.534 [2024-12-06 10:27:53.596908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.534 [2024-12-06 10:27:53.597099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:47.534 [2024-12-06 10:27:53.597186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.372 ms 00:28:47.534 [2024-12-06 10:27:53.597225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.534 [2024-12-06 10:27:53.610337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.534 [2024-12-06 10:27:53.610536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:47.534 [2024-12-06 10:27:53.610632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.016 ms 00:28:47.534 [2024-12-06 10:27:53.610673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.534 [2024-12-06 10:27:53.623327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.534 [2024-12-06 10:27:53.623510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:47.534 [2024-12-06 10:27:53.623597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.580 ms 00:28:47.534 [2024-12-06 10:27:53.623636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.534 [2024-12-06 10:27:53.624359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.534 [2024-12-06 10:27:53.624399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:47.534 [2024-12-06 10:27:53.624419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:28:47.534 [2024-12-06 10:27:53.624431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.534 [2024-12-06 10:27:53.691224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.534 [2024-12-06 10:27:53.691488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:47.534 [2024-12-06 10:27:53.691608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.628 ms 00:28:47.534 [2024-12-06 10:27:53.691647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.796 [2024-12-06 10:27:53.703138] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:47.796 [2024-12-06 10:27:53.706469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.796 [2024-12-06 10:27:53.706632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:47.796 [2024-12-06 10:27:53.706709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.744 ms 00:28:47.796 [2024-12-06 10:27:53.706745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.796 [2024-12-06 10:27:53.706881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.796 [2024-12-06 10:27:53.706935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:47.796 [2024-12-06 10:27:53.707132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:47.796 [2024-12-06 10:27:53.707152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.796 [2024-12-06 10:27:53.708059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.796 [2024-12-06 10:27:53.708104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:47.796 [2024-12-06 10:27:53.708120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:28:47.796 [2024-12-06 10:27:53.708133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.796 [2024-12-06 10:27:53.708195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.796 [2024-12-06 10:27:53.708211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:47.796 [2024-12-06 10:27:53.708226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:28:47.796 [2024-12-06 10:27:53.708241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.796 [2024-12-06 10:27:53.708302] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:47.796 [2024-12-06 10:27:53.708320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.796 [2024-12-06 10:27:53.708337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:47.796 [2024-12-06 10:27:53.708352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:47.796 [2024-12-06 10:27:53.708367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.796 [2024-12-06 10:27:53.734473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.796 [2024-12-06 10:27:53.734653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:47.796 [2024-12-06 10:27:53.734769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.073 ms 00:28:47.796 [2024-12-06 10:27:53.734810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.796 [2024-12-06 10:27:53.734937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.796 [2024-12-06 10:27:53.735040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:47.796 [2024-12-06 10:27:53.735078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:47.796 [2024-12-06 10:27:53.735111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
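The two layout dumps earlier in this startup describe the same regions twice: ftl_layout.c in MiB, and upgrade/ftl_sb_v5.c as hex block offsets and sizes. Matching them by offset, region type 0x2 lines up with l2p and type 0x3 with band_md. Assuming a 4 KiB FTL block (an assumption; the log never prints the block size), the unit conversion checks out, as this short sketch verifies:

    FTL_BLOCK_SIZE = 4096  # assumed 4 KiB block; inferred, not printed in this log

    def blocks_to_mib(blocks: int) -> float:
        return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

    # type:0x2 blk_offs:0x20 blk_sz:0x5000 <-> "Region l2p: offset 0.12 MiB, blocks 80.00 MiB"
    assert f"{blocks_to_mib(0x20):.2f}" == "0.12"
    assert f"{blocks_to_mib(0x5000):.2f}" == "80.00"
    # type:0x3 blk_offs:0x5020 blk_sz:0x80 <-> "Region band_md: offset 80.12 MiB, blocks 0.50 MiB"
    assert f"{blocks_to_mib(0x5020):.2f}" == "80.12"
    assert f"{blocks_to_mib(0x80):.2f}" == "0.50"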
00:28:47.796 [2024-12-06 10:27:53.737191] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.038 ms, result 0 00:28:49.181  [2024-12-06T10:27:55.917Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-06T10:27:57.295Z] Copying: 23/1024 [MB] (11 MBps) [2024-12-06T10:27:58.234Z] Copying: 53/1024 [MB] (29 MBps) [2024-12-06T10:27:59.174Z] Copying: 71/1024 [MB] (17 MBps) [2024-12-06T10:28:00.113Z] Copying: 90/1024 [MB] (19 MBps) [2024-12-06T10:28:01.078Z] Copying: 109/1024 [MB] (19 MBps) [2024-12-06T10:28:02.023Z] Copying: 122/1024 [MB] (13 MBps) [2024-12-06T10:28:02.964Z] Copying: 140/1024 [MB] (17 MBps) [2024-12-06T10:28:04.420Z] Copying: 154/1024 [MB] (13 MBps) [2024-12-06T10:28:04.996Z] Copying: 175/1024 [MB] (20 MBps) [2024-12-06T10:28:05.939Z] Copying: 190/1024 [MB] (15 MBps) [2024-12-06T10:28:07.323Z] Copying: 207/1024 [MB] (16 MBps) [2024-12-06T10:28:08.268Z] Copying: 222/1024 [MB] (14 MBps) [2024-12-06T10:28:09.211Z] Copying: 232/1024 [MB] (10 MBps) [2024-12-06T10:28:10.155Z] Copying: 248/1024 [MB] (15 MBps) [2024-12-06T10:28:11.100Z] Copying: 264/1024 [MB] (16 MBps) [2024-12-06T10:28:12.044Z] Copying: 284/1024 [MB] (19 MBps) [2024-12-06T10:28:12.987Z] Copying: 302/1024 [MB] (18 MBps) [2024-12-06T10:28:13.933Z] Copying: 322/1024 [MB] (19 MBps) [2024-12-06T10:28:15.320Z] Copying: 346/1024 [MB] (23 MBps) [2024-12-06T10:28:16.259Z] Copying: 368/1024 [MB] (21 MBps) [2024-12-06T10:28:17.201Z] Copying: 384/1024 [MB] (16 MBps) [2024-12-06T10:28:18.147Z] Copying: 402/1024 [MB] (18 MBps) [2024-12-06T10:28:19.090Z] Copying: 422/1024 [MB] (19 MBps) [2024-12-06T10:28:20.035Z] Copying: 441/1024 [MB] (18 MBps) [2024-12-06T10:28:20.981Z] Copying: 461/1024 [MB] (20 MBps) [2024-12-06T10:28:21.924Z] Copying: 479/1024 [MB] (17 MBps) [2024-12-06T10:28:23.304Z] Copying: 490/1024 [MB] (10 MBps) [2024-12-06T10:28:23.920Z] Copying: 500/1024 [MB] (10 MBps) [2024-12-06T10:28:25.299Z] Copying: 512/1024 [MB] (11 MBps) [2024-12-06T10:28:26.239Z] Copying: 522/1024 [MB] (10 MBps) [2024-12-06T10:28:27.180Z] Copying: 535/1024 [MB] (12 MBps) [2024-12-06T10:28:28.121Z] Copying: 549/1024 [MB] (14 MBps) [2024-12-06T10:28:29.070Z] Copying: 567/1024 [MB] (17 MBps) [2024-12-06T10:28:30.068Z] Copying: 578/1024 [MB] (11 MBps) [2024-12-06T10:28:31.011Z] Copying: 595/1024 [MB] (16 MBps) [2024-12-06T10:28:31.952Z] Copying: 617/1024 [MB] (21 MBps) [2024-12-06T10:28:33.339Z] Copying: 637/1024 [MB] (20 MBps) [2024-12-06T10:28:34.286Z] Copying: 656/1024 [MB] (19 MBps) [2024-12-06T10:28:35.231Z] Copying: 671/1024 [MB] (15 MBps) [2024-12-06T10:28:36.169Z] Copying: 689/1024 [MB] (17 MBps) [2024-12-06T10:28:37.112Z] Copying: 711/1024 [MB] (22 MBps) [2024-12-06T10:28:38.054Z] Copying: 722/1024 [MB] (10 MBps) [2024-12-06T10:28:38.998Z] Copying: 736/1024 [MB] (14 MBps) [2024-12-06T10:28:39.944Z] Copying: 751/1024 [MB] (14 MBps) [2024-12-06T10:28:41.333Z] Copying: 770/1024 [MB] (19 MBps) [2024-12-06T10:28:42.279Z] Copying: 799600/1048576 [kB] (10224 kBps) [2024-12-06T10:28:43.225Z] Copying: 791/1024 [MB] (10 MBps) [2024-12-06T10:28:44.169Z] Copying: 801/1024 [MB] (10 MBps) [2024-12-06T10:28:45.112Z] Copying: 819/1024 [MB] (18 MBps) [2024-12-06T10:28:46.103Z] Copying: 836/1024 [MB] (16 MBps) [2024-12-06T10:28:47.047Z] Copying: 846/1024 [MB] (10 MBps) [2024-12-06T10:28:47.989Z] Copying: 856/1024 [MB] (10 MBps) [2024-12-06T10:28:48.935Z] Copying: 877/1024 [MB] (20 MBps) [2024-12-06T10:28:50.319Z] Copying: 893/1024 [MB] (16 MBps) [2024-12-06T10:28:51.261Z] Copying: 907/1024 [MB] (13 
MBps) [2024-12-06T10:28:52.206Z] Copying: 921/1024 [MB] (14 MBps) [2024-12-06T10:28:53.151Z] Copying: 940/1024 [MB] (19 MBps) [2024-12-06T10:28:54.096Z] Copying: 953/1024 [MB] (13 MBps) [2024-12-06T10:28:55.040Z] Copying: 968/1024 [MB] (15 MBps) [2024-12-06T10:28:55.997Z] Copying: 989/1024 [MB] (20 MBps) [2024-12-06T10:28:56.942Z] Copying: 1010/1024 [MB] (20 MBps) [2024-12-06T10:28:57.516Z] Copying: 1020/1024 [MB] (10 MBps) [2024-12-06T10:28:57.778Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-06 10:28:57.666999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.667411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:51.611 [2024-12-06 10:28:57.667442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:51.611 [2024-12-06 10:28:57.667470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.667510] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:51.611 [2024-12-06 10:28:57.670910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.670959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:51.611 [2024-12-06 10:28:57.670972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:29:51.611 [2024-12-06 10:28:57.670983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.671265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.671277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:51.611 [2024-12-06 10:28:57.671288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:29:51.611 [2024-12-06 10:28:57.671298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.675499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.675523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:51.611 [2024-12-06 10:28:57.675534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:29:51.611 [2024-12-06 10:28:57.675548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.682622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.682657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:51.611 [2024-12-06 10:28:57.682667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.050 ms 00:29:51.611 [2024-12-06 10:28:57.682675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.710333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.710549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:51.611 [2024-12-06 10:28:57.710573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.587 ms 00:29:51.611 [2024-12-06 10:28:57.710582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.727816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.727858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:29:51.611 [2024-12-06 10:28:57.727870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.189 ms 00:29:51.611 [2024-12-06 10:28:57.727879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.732558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.732704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:51.611 [2024-12-06 10:28:57.732958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.611 ms 00:29:51.611 [2024-12-06 10:28:57.732983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.611 [2024-12-06 10:28:57.759762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.611 [2024-12-06 10:28:57.759949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:51.611 [2024-12-06 10:28:57.760018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.747 ms 00:29:51.611 [2024-12-06 10:28:57.760043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.875 [2024-12-06 10:28:57.786320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.875 [2024-12-06 10:28:57.786525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:51.875 [2024-12-06 10:28:57.786726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.226 ms 00:29:51.875 [2024-12-06 10:28:57.786770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.875 [2024-12-06 10:28:57.812853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.875 [2024-12-06 10:28:57.813023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:51.875 [2024-12-06 10:28:57.813110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.022 ms 00:29:51.875 [2024-12-06 10:28:57.813133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.875 [2024-12-06 10:28:57.838847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.875 [2024-12-06 10:28:57.839015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:51.875 [2024-12-06 10:28:57.839075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.621 ms 00:29:51.875 [2024-12-06 10:28:57.839098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.875 [2024-12-06 10:28:57.839145] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:51.875 [2024-12-06 10:28:57.839183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:51.875 [2024-12-06 10:28:57.839220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:51.875 [2024-12-06 10:28:57.839250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839399] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.839989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840261] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:51.875 [2024-12-06 10:28:57.840486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.840991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 
10:28:57.841099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:29:51.876 [2024-12-06 10:28:57.841305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:51.876 [2024-12-06 10:28:57.841491] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:51.876 [2024-12-06 10:28:57.841501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6e267339-64bd-4d3b-92f1-986b0822465a 00:29:51.876 [2024-12-06 10:28:57.841509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:51.876 [2024-12-06 10:28:57.841517] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:51.876 [2024-12-06 10:28:57.841525] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:51.876 [2024-12-06 10:28:57.841534] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:51.876 [2024-12-06 10:28:57.841552] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:29:51.876 [2024-12-06 10:28:57.841560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:51.876 [2024-12-06 10:28:57.841568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:51.876 [2024-12-06 10:28:57.841574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:51.876 [2024-12-06 10:28:57.841582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:51.876 [2024-12-06 10:28:57.841591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.876 [2024-12-06 10:28:57.841604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:51.876 [2024-12-06 10:28:57.841614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.447 ms 00:29:51.876 [2024-12-06 10:28:57.841626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.876 [2024-12-06 10:28:57.856110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.876 [2024-12-06 10:28:57.856146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:51.876 [2024-12-06 10:28:57.856158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.441 ms 00:29:51.876 [2024-12-06 10:28:57.856166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.876 [2024-12-06 10:28:57.856626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:51.876 [2024-12-06 10:28:57.856647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:51.876 [2024-12-06 10:28:57.856657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:29:51.876 [2024-12-06 10:28:57.856666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.876 [2024-12-06 10:28:57.893714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.876 [2024-12-06 10:28:57.893753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:51.876 [2024-12-06 10:28:57.893766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.876 [2024-12-06 10:28:57.893775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.876 [2024-12-06 10:28:57.893849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.876 [2024-12-06 10:28:57.893866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:51.876 [2024-12-06 10:28:57.893876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.876 [2024-12-06 10:28:57.893884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.876 [2024-12-06 10:28:57.893980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.876 [2024-12-06 10:28:57.893991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:51.877 [2024-12-06 10:28:57.894001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.877 [2024-12-06 10:28:57.894011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.877 [2024-12-06 10:28:57.894028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.877 [2024-12-06 10:28:57.894038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:51.877 [2024-12-06 10:28:57.894052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.877 [2024-12-06 10:28:57.894061] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:51.877 [2024-12-06 10:28:57.979271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:51.877 [2024-12-06 10:28:57.979315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:51.877 [2024-12-06 10:28:57.979327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:51.877 [2024-12-06 10:28:57.979336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.048520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.138 [2024-12-06 10:28:58.048578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:52.138 [2024-12-06 10:28:58.048590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.138 [2024-12-06 10:28:58.048598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.048664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.138 [2024-12-06 10:28:58.048675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:52.138 [2024-12-06 10:28:58.048684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.138 [2024-12-06 10:28:58.048692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.048749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.138 [2024-12-06 10:28:58.048759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:52.138 [2024-12-06 10:28:58.048768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.138 [2024-12-06 10:28:58.048780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.048876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.138 [2024-12-06 10:28:58.048886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:52.138 [2024-12-06 10:28:58.048895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.138 [2024-12-06 10:28:58.048904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.048937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.138 [2024-12-06 10:28:58.048947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:52.138 [2024-12-06 10:28:58.048955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.138 [2024-12-06 10:28:58.048963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.049011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.138 [2024-12-06 10:28:58.049021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:52.138 [2024-12-06 10:28:58.049029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:52.138 [2024-12-06 10:28:58.049038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.049086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:52.138 [2024-12-06 10:28:58.049097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:52.138 [2024-12-06 10:28:58.049106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:29:52.138 [2024-12-06 10:28:58.049118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:52.138 [2024-12-06 10:28:58.049257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.225 ms, result 0 00:29:52.710 00:29:52.710 00:29:52.710 10:28:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:55.254 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:55.254 10:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:55.254 10:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:55.254 10:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:55.254 10:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:55.254 10:29:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:55.254 Process with pid 80386 is not found 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80386 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80386 ']' 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80386 00:29:55.254 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80386) - No such process 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80386 is not found' 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:55.254 Remove shared memory files 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:55.254 ************************************ 00:29:55.254 END TEST ftl_dirty_shutdown 00:29:55.254 ************************************ 00:29:55.254 00:29:55.254 real 3m59.371s 00:29:55.254 user 4m23.729s 00:29:55.254 sys 0m27.564s 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:55.254 10:29:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:55.513 10:29:01 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:55.513 10:29:01 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:55.513 10:29:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:55.513 10:29:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:55.513 
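The "md5sum -c" pass above is the heart of the dirty-shutdown test: a checksum recorded before the unclean shutdown must still verify after FTL recovery. A minimal sketch of that round trip, with illustrative paths rather than the exact dirty_shutdown.sh steps:

  md5sum testfile2 > testfile2.md5   # checksum recorded while the device is live
  # ... FTL device is shut down uncleanly, then brought back up from the same media ...
  md5sum -c testfile2.md5            # prints "testfile2: OK" only if recovery preserved the data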
************************************ 00:29:55.513 START TEST ftl_upgrade_shutdown 00:29:55.513 ************************************ 00:29:55.513 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:55.513 * Looking for test storage... 00:29:55.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:55.513 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:55.513 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:55.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.514 --rc genhtml_branch_coverage=1 00:29:55.514 --rc genhtml_function_coverage=1 00:29:55.514 --rc genhtml_legend=1 00:29:55.514 --rc geninfo_all_blocks=1 00:29:55.514 --rc geninfo_unexecuted_blocks=1 00:29:55.514 00:29:55.514 ' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:55.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.514 --rc genhtml_branch_coverage=1 00:29:55.514 --rc genhtml_function_coverage=1 00:29:55.514 --rc genhtml_legend=1 00:29:55.514 --rc geninfo_all_blocks=1 00:29:55.514 --rc geninfo_unexecuted_blocks=1 00:29:55.514 00:29:55.514 ' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:55.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.514 --rc genhtml_branch_coverage=1 00:29:55.514 --rc genhtml_function_coverage=1 00:29:55.514 --rc genhtml_legend=1 00:29:55.514 --rc geninfo_all_blocks=1 00:29:55.514 --rc geninfo_unexecuted_blocks=1 00:29:55.514 00:29:55.514 ' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:55.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:55.514 --rc genhtml_branch_coverage=1 00:29:55.514 --rc genhtml_function_coverage=1 00:29:55.514 --rc genhtml_legend=1 00:29:55.514 --rc geninfo_all_blocks=1 00:29:55.514 --rc geninfo_unexecuted_blocks=1 00:29:55.514 00:29:55.514 ' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- 
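The xtrace above walks through the shell helper that decides whether the installed lcov is older than 2: both version strings are split on '.', '-' and ':' and compared component by component. A condensed, illustrative re-implementation of that idiom (not the exact scripts/common.sh code):

  lt() {   # returns 0 when version $1 sorts before version $2
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing components compare as 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2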
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:55.514 10:29:01 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82956 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:55.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82956 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82956 ']' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.514 10:29:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:55.775 [2024-12-06 10:29:01.701549] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
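waitforlisten blocks until the freshly launched spdk_tgt answers on its RPC socket before any bdev calls are issued. The general shape of that polling loop, as an illustrative sketch (the real autotest_common.sh helper adds retry limits and more diagnostics):

  # assumes spdk_tgt was just started in the background and its pid captured in $spdk_tgt_pid
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done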
00:29:55.775 [2024-12-06 10:29:01.701652] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82956 ] 00:29:55.775 [2024-12-06 10:29:01.859463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.037 [2024-12-06 10:29:01.977817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:56.982 10:29:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:56.982 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:57.247 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:57.247 { 00:29:57.247 "name": "basen1", 00:29:57.247 "aliases": [ 00:29:57.247 "30c64532-3375-4893-b325-54f3f6c7ae8a" 00:29:57.247 ], 00:29:57.247 "product_name": "NVMe disk", 00:29:57.247 "block_size": 4096, 00:29:57.247 "num_blocks": 1310720, 00:29:57.247 "uuid": "30c64532-3375-4893-b325-54f3f6c7ae8a", 00:29:57.247 "numa_id": -1, 00:29:57.247 "assigned_rate_limits": { 00:29:57.247 "rw_ios_per_sec": 0, 00:29:57.247 "rw_mbytes_per_sec": 0, 00:29:57.247 "r_mbytes_per_sec": 0, 00:29:57.247 "w_mbytes_per_sec": 0 00:29:57.247 }, 00:29:57.247 "claimed": true, 00:29:57.247 "claim_type": "read_many_write_one", 00:29:57.247 "zoned": false, 00:29:57.247 "supported_io_types": { 00:29:57.247 "read": true, 00:29:57.247 "write": true, 00:29:57.247 "unmap": true, 00:29:57.247 "flush": true, 00:29:57.247 "reset": true, 00:29:57.247 "nvme_admin": true, 00:29:57.247 "nvme_io": true, 00:29:57.247 "nvme_io_md": false, 00:29:57.247 "write_zeroes": true, 00:29:57.247 "zcopy": false, 00:29:57.247 "get_zone_info": false, 00:29:57.247 "zone_management": false, 00:29:57.247 "zone_append": false, 00:29:57.247 "compare": true, 00:29:57.247 "compare_and_write": false, 00:29:57.247 "abort": true, 00:29:57.247 "seek_hole": false, 00:29:57.247 "seek_data": false, 00:29:57.247 "copy": true, 00:29:57.247 "nvme_iov_md": false 00:29:57.247 }, 00:29:57.247 "driver_specific": { 00:29:57.247 "nvme": [ 00:29:57.247 { 00:29:57.247 "pci_address": "0000:00:11.0", 00:29:57.247 "trid": { 00:29:57.247 "trtype": "PCIe", 00:29:57.247 "traddr": "0000:00:11.0" 00:29:57.247 }, 00:29:57.247 "ctrlr_data": { 00:29:57.247 "cntlid": 0, 00:29:57.247 "vendor_id": "0x1b36", 00:29:57.247 "model_number": "QEMU NVMe Ctrl", 00:29:57.247 "serial_number": "12341", 00:29:57.247 "firmware_revision": "8.0.0", 00:29:57.247 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:57.247 "oacs": { 00:29:57.247 "security": 0, 00:29:57.247 "format": 1, 00:29:57.247 "firmware": 0, 00:29:57.247 "ns_manage": 1 00:29:57.247 }, 00:29:57.247 "multi_ctrlr": false, 00:29:57.247 "ana_reporting": false 00:29:57.247 }, 00:29:57.247 "vs": { 00:29:57.247 "nvme_version": "1.4" 00:29:57.247 }, 00:29:57.247 "ns_data": { 00:29:57.247 "id": 1, 00:29:57.247 "can_share": false 00:29:57.247 } 00:29:57.247 } 00:29:57.247 ], 00:29:57.247 "mp_policy": "active_passive" 00:29:57.247 } 00:29:57.247 } 00:29:57.247 ]' 00:29:57.247 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:57.247 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:57.247 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- 
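get_bdev_size above is just arithmetic on the two values jq pulls out of bdev_get_bdevs: size in MiB = block_size * num_blocks / 2^20. Plugging in this log's numbers for basen1:

  echo $(( 4096 * 1310720 / 1024 / 1024 ))   # -> 5120, the base_size in MiB echoed above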
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:57.248 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:57.554 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=7fcb2fa2-2794-40b7-8d4a-5999d968fa6a 00:29:57.554 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:57.554 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7fcb2fa2-2794-40b7-8d4a-5999d968fa6a 00:29:57.837 10:29:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:58.095 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=6dee0fd4-4a14-43c5-b9cb-341deafd8f94 00:29:58.095 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 6dee0fd4-4a14-43c5-b9cb-341deafd8f94 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 ]] 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 5120 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 00:29:58.353 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:58.353 { 00:29:58.353 "name": "635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5", 00:29:58.353 "aliases": [ 00:29:58.353 "lvs/basen1p0" 00:29:58.353 ], 00:29:58.353 "product_name": "Logical Volume", 00:29:58.353 "block_size": 4096, 00:29:58.353 "num_blocks": 5242880, 00:29:58.353 "uuid": "635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5", 00:29:58.353 "assigned_rate_limits": { 00:29:58.353 "rw_ios_per_sec": 0, 00:29:58.353 "rw_mbytes_per_sec": 0, 00:29:58.353 "r_mbytes_per_sec": 0, 00:29:58.353 "w_mbytes_per_sec": 0 00:29:58.353 }, 00:29:58.353 "claimed": false, 00:29:58.353 "zoned": false, 00:29:58.353 "supported_io_types": { 00:29:58.353 "read": true, 00:29:58.353 "write": true, 00:29:58.353 "unmap": true, 00:29:58.353 "flush": false, 00:29:58.353 "reset": true, 00:29:58.353 "nvme_admin": false, 00:29:58.353 "nvme_io": false, 00:29:58.353 "nvme_io_md": false, 00:29:58.353 "write_zeroes": 
true, 00:29:58.353 "zcopy": false, 00:29:58.353 "get_zone_info": false, 00:29:58.353 "zone_management": false, 00:29:58.353 "zone_append": false, 00:29:58.353 "compare": false, 00:29:58.353 "compare_and_write": false, 00:29:58.353 "abort": false, 00:29:58.353 "seek_hole": true, 00:29:58.353 "seek_data": true, 00:29:58.353 "copy": false, 00:29:58.353 "nvme_iov_md": false 00:29:58.353 }, 00:29:58.353 "driver_specific": { 00:29:58.353 "lvol": { 00:29:58.353 "lvol_store_uuid": "6dee0fd4-4a14-43c5-b9cb-341deafd8f94", 00:29:58.353 "base_bdev": "basen1", 00:29:58.354 "thin_provision": true, 00:29:58.354 "num_allocated_clusters": 0, 00:29:58.354 "snapshot": false, 00:29:58.354 "clone": false, 00:29:58.354 "esnap_clone": false 00:29:58.354 } 00:29:58.354 } 00:29:58.354 } 00:29:58.354 ]' 00:29:58.354 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:58.612 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:58.871 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:58.871 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:58.871 10:29:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:58.871 10:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:58.871 10:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:58.871 10:29:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 635ae9f9-a59f-4c2f-883d-ce6d4c82b7f5 -c cachen1p0 --l2p_dram_limit 2 00:29:59.134 [2024-12-06 10:29:05.202318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.202361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:59.134 [2024-12-06 10:29:05.202375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:59.134 [2024-12-06 10:29:05.202383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.202434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.202443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:59.134 [2024-12-06 10:29:05.202465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:59.134 [2024-12-06 10:29:05.202472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.202489] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:59.134 [2024-12-06 
10:29:05.202991] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:59.134 [2024-12-06 10:29:05.203012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.203018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:59.134 [2024-12-06 10:29:05.203029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:29:59.134 [2024-12-06 10:29:05.203035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.203085] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 471f812c-23f7-41c3-b211-e349b3c88310 00:29:59.134 [2024-12-06 10:29:05.204392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.204412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:59.134 [2024-12-06 10:29:05.204419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:59.134 [2024-12-06 10:29:05.204427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.211403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.211431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:59.134 [2024-12-06 10:29:05.211440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.932 ms 00:29:59.134 [2024-12-06 10:29:05.211460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.211493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.211502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:59.134 [2024-12-06 10:29:05.211508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:59.134 [2024-12-06 10:29:05.211517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.211554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.211564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:59.134 [2024-12-06 10:29:05.211573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:59.134 [2024-12-06 10:29:05.211580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.211596] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:59.134 [2024-12-06 10:29:05.214905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.214929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:59.134 [2024-12-06 10:29:05.214939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.311 ms 00:29:59.134 [2024-12-06 10:29:05.214945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.214969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.214976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:59.134 [2024-12-06 10:29:05.214984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:59.134 [2024-12-06 10:29:05.214990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.215010] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:59.134 [2024-12-06 10:29:05.215124] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:59.134 [2024-12-06 10:29:05.215138] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:59.134 [2024-12-06 10:29:05.215148] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:59.134 [2024-12-06 10:29:05.215157] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:59.134 [2024-12-06 10:29:05.215164] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:59.134 [2024-12-06 10:29:05.215172] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:59.134 [2024-12-06 10:29:05.215179] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:59.134 [2024-12-06 10:29:05.215188] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:59.134 [2024-12-06 10:29:05.215194] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:59.134 [2024-12-06 10:29:05.215201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.215207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:59.134 [2024-12-06 10:29:05.215214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:29:59.134 [2024-12-06 10:29:05.215220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.215287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.134 [2024-12-06 10:29:05.215299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:59.134 [2024-12-06 10:29:05.215307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:59.134 [2024-12-06 10:29:05.215314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.134 [2024-12-06 10:29:05.215395] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:59.134 [2024-12-06 10:29:05.215403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:59.134 [2024-12-06 10:29:05.215411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:59.134 [2024-12-06 10:29:05.215418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.134 [2024-12-06 10:29:05.215425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:59.134 [2024-12-06 10:29:05.215430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:59.134 [2024-12-06 10:29:05.215438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:59.134 [2024-12-06 10:29:05.215443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:59.134 [2024-12-06 10:29:05.215653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:59.134 [2024-12-06 10:29:05.215671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.134 [2024-12-06 10:29:05.215687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:59.134 [2024-12-06 10:29:05.215702] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:59.134 [2024-12-06 10:29:05.215719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.134 [2024-12-06 10:29:05.215733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:59.134 [2024-12-06 10:29:05.215748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:59.134 [2024-12-06 10:29:05.215804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.134 [2024-12-06 10:29:05.215826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:59.134 [2024-12-06 10:29:05.215840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:59.134 [2024-12-06 10:29:05.215856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.134 [2024-12-06 10:29:05.215872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:59.134 [2024-12-06 10:29:05.215888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:59.134 [2024-12-06 10:29:05.215902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:59.134 [2024-12-06 10:29:05.215918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:59.134 [2024-12-06 10:29:05.215932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:59.135 [2024-12-06 10:29:05.215974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:59.135 [2024-12-06 10:29:05.215990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:59.135 [2024-12-06 10:29:05.216006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:59.135 [2024-12-06 10:29:05.216020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:59.135 [2024-12-06 10:29:05.216035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:59.135 [2024-12-06 10:29:05.216048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:59.135 [2024-12-06 10:29:05.216064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:59.135 [2024-12-06 10:29:05.216078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:59.135 [2024-12-06 10:29:05.216095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:59.135 [2024-12-06 10:29:05.216140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.135 [2024-12-06 10:29:05.216159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:59.135 [2024-12-06 10:29:05.216173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:59.135 [2024-12-06 10:29:05.216198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.135 [2024-12-06 10:29:05.216213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:59.135 [2024-12-06 10:29:05.216230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:59.135 [2024-12-06 10:29:05.216244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.135 [2024-12-06 10:29:05.216259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:59.135 [2024-12-06 10:29:05.216273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:59.135 [2024-12-06 10:29:05.216353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.135 [2024-12-06 10:29:05.216361] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:59.135 [2024-12-06 10:29:05.216370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:59.135 [2024-12-06 10:29:05.216376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:59.135 [2024-12-06 10:29:05.216383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:59.135 [2024-12-06 10:29:05.216389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:59.135 [2024-12-06 10:29:05.216398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:59.135 [2024-12-06 10:29:05.216404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:59.135 [2024-12-06 10:29:05.216411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:59.135 [2024-12-06 10:29:05.216417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:59.135 [2024-12-06 10:29:05.216424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:59.135 [2024-12-06 10:29:05.216432] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:59.135 [2024-12-06 10:29:05.216443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:59.135 [2024-12-06 10:29:05.216469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:59.135 [2024-12-06 10:29:05.216487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:59.135 [2024-12-06 10:29:05.216494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:59.135 [2024-12-06 10:29:05.216500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:59.135 [2024-12-06 10:29:05.216507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:59.135 [2024-12-06 10:29:05.216553] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:59.135 [2024-12-06 10:29:05.216561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:59.135 [2024-12-06 10:29:05.216575] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:59.135 [2024-12-06 10:29:05.216581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:59.135 [2024-12-06 10:29:05.216588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:59.135 [2024-12-06 10:29:05.216595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.135 [2024-12-06 10:29:05.216603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:59.135 [2024-12-06 10:29:05.216610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.260 ms 00:29:59.135 [2024-12-06 10:29:05.216617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.135 [2024-12-06 10:29:05.216665] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
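The superblock dump lists every region twice: in MiB in the layout section earlier, and here as 4 KiB blocks in hex (blk_offs/blk_sz). The two views agree; for example the L2P region (type:0x2, blk_sz:0xe80) and the base data region (type:0x9, blk_sz:0x480000):

  echo $(( 0xe80 * 4096 ))      # 15204352 bytes    = 14.50 MiB, matching "Region l2p"
  echo $(( 0x480000 * 4096 ))   # 19327352832 bytes = 18432.00 MiB, matching "Region data_btm"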
00:29:59.135 [2024-12-06 10:29:05.216676] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:03.329 [2024-12-06 10:29:09.331617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.331671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:03.329 [2024-12-06 10:29:09.331685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4114.937 ms 00:30:03.329 [2024-12-06 10:29:09.331695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.355222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.355262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:03.329 [2024-12-06 10:29:09.355274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.345 ms 00:30:03.329 [2024-12-06 10:29:09.355283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.355343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.355352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:03.329 [2024-12-06 10:29:09.355361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:03.329 [2024-12-06 10:29:09.355371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.381879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.382001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:03.329 [2024-12-06 10:29:09.382015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.465 ms 00:30:03.329 [2024-12-06 10:29:09.382024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.382054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.382063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:03.329 [2024-12-06 10:29:09.382070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:03.329 [2024-12-06 10:29:09.382077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.382489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.382506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:03.329 [2024-12-06 10:29:09.382520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.376 ms 00:30:03.329 [2024-12-06 10:29:09.382528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.382560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.382572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:03.329 [2024-12-06 10:29:09.382578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:03.329 [2024-12-06 10:29:09.382587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.395523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.395549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:03.329 [2024-12-06 10:29:09.395558] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.921 ms 00:30:03.329 [2024-12-06 10:29:09.395566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.415707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:03.329 [2024-12-06 10:29:09.416994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.417031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:03.329 [2024-12-06 10:29:09.417048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.364 ms 00:30:03.329 [2024-12-06 10:29:09.417059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.447503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.447543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:03.329 [2024-12-06 10:29:09.447558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.399 ms 00:30:03.329 [2024-12-06 10:29:09.447566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.447658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.447669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:03.329 [2024-12-06 10:29:09.447681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:30:03.329 [2024-12-06 10:29:09.447689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.329 [2024-12-06 10:29:09.471170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.329 [2024-12-06 10:29:09.471315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:03.329 [2024-12-06 10:29:09.471337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.432 ms 00:30:03.329 [2024-12-06 10:29:09.471348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.589 [2024-12-06 10:29:09.494921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.589 [2024-12-06 10:29:09.494960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:03.589 [2024-12-06 10:29:09.494975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.305 ms 00:30:03.589 [2024-12-06 10:29:09.494982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.589 [2024-12-06 10:29:09.495556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.589 [2024-12-06 10:29:09.495569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:03.589 [2024-12-06 10:29:09.495584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:30:03.589 [2024-12-06 10:29:09.495592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.589 [2024-12-06 10:29:09.572253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.589 [2024-12-06 10:29:09.572287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:03.589 [2024-12-06 10:29:09.572304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.626 ms 00:30:03.589 [2024-12-06 10:29:09.572313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.589 [2024-12-06 10:29:09.597395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:03.589 [2024-12-06 10:29:09.597431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:03.589 [2024-12-06 10:29:09.597455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.010 ms 00:30:03.589 [2024-12-06 10:29:09.597464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.589 [2024-12-06 10:29:09.621312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.589 [2024-12-06 10:29:09.621344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:03.589 [2024-12-06 10:29:09.621357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.809 ms 00:30:03.589 [2024-12-06 10:29:09.621364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.589 [2024-12-06 10:29:09.645564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.589 [2024-12-06 10:29:09.645596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:03.589 [2024-12-06 10:29:09.645609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.162 ms 00:30:03.589 [2024-12-06 10:29:09.645617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.589 [2024-12-06 10:29:09.645659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.589 [2024-12-06 10:29:09.645668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:03.589 [2024-12-06 10:29:09.645682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:03.590 [2024-12-06 10:29:09.645689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.590 [2024-12-06 10:29:09.645768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:03.590 [2024-12-06 10:29:09.645782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:03.590 [2024-12-06 10:29:09.645793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:03.590 [2024-12-06 10:29:09.645801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:03.590 [2024-12-06 10:29:09.646786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4443.980 ms, result 0 00:30:03.590 { 00:30:03.590 "name": "ftl", 00:30:03.590 "uuid": "471f812c-23f7-41c3-b211-e349b3c88310" 00:30:03.590 } 00:30:03.590 10:29:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:03.851 [2024-12-06 10:29:09.862093] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:03.851 10:29:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:04.110 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:04.370 [2024-12-06 10:29:10.290626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:04.370 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:04.370 [2024-12-06 10:29:10.504546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:04.370 10:29:10 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:04.939 Fill FTL, iteration 1 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83087 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83087 /var/tmp/spdk.tgt.sock 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83087 ']' 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:04.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.939 10:29:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:04.939 [2024-12-06 10:29:10.956833] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
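For reference, the target-side export traced above (common.sh@121 through @126) reduces to the following sketch; every command and value is copied from the log, and rpc.py here talks to the primary target's default /var/tmp/spdk.sock socket:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport --trtype TCP                                # TCP transport init
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1          # allow any host, 1 namespace max
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl              # expose the FTL bdev as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $rpc save_config    # emits the JSON config on stdout; presumably captured into tgt.json for the restart at @75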
00:30:04.939 [2024-12-06 10:29:10.957230] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83087 ] 00:30:05.200 [2024-12-06 10:29:11.123048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.200 [2024-12-06 10:29:11.253377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.765 10:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.765 10:29:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:05.765 10:29:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:06.023 ftln1 00:30:06.023 10:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:06.023 10:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83087 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83087 ']' 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83087 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83087 00:30:06.281 killing process with pid 83087 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83087' 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83087 00:30:06.281 10:29:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83087 00:30:07.656 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:07.656 10:29:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:07.656 [2024-12-06 10:29:13.807958] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
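The initiator side, also traced above, runs a second spdk_tgt on its own RPC socket, attaches to the NVMe/TCP export (the attach prints the namespace bdev name, ftln1), and snapshots the bdev subsystem so the spdk_dd runs below can load it with --json. A sketch, assuming the wrapped JSON is what ends up in test/ftl/config/ini.json (the destination redirect itself is not visible in the trace):

  ini_rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0                # prints the attached namespace bdev: ftln1
  {
      echo '{"subsystems": ['
      $ini_rpc save_subsystem_config -n bdev       # bdev subsystem only, wrapped into a full config
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json   # assumed destination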
00:30:07.656 [2024-12-06 10:29:13.808079] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83134 ] 00:30:07.913 [2024-12-06 10:29:13.965223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.913 [2024-12-06 10:29:14.039688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.294  [2024-12-06T10:29:16.402Z] Copying: 250/1024 [MB] (250 MBps) [2024-12-06T10:29:17.344Z] Copying: 499/1024 [MB] (249 MBps) [2024-12-06T10:29:18.727Z] Copying: 743/1024 [MB] (244 MBps) [2024-12-06T10:29:18.727Z] Copying: 1001/1024 [MB] (258 MBps) [2024-12-06T10:29:19.300Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:30:13.133 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:13.133 Calculate MD5 checksum, iteration 1 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:13.133 10:29:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:13.133 [2024-12-06 10:29:19.099912] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
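Each tcp_dd call in this test expands to the spdk_dd invocation shown in the trace; a sketch of the wrapper as reconstructible from common.sh@198-199:

  tcp_dd() {
      tcp_initiator_setup          # returns immediately once ini.json exists (common.sh@153-154)
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
          --rpc-socket=/var/tmp/spdk.tgt.sock \
          --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
  }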
00:30:13.133 [2024-12-06 10:29:19.100049] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83187 ] 00:30:13.133 [2024-12-06 10:29:19.258100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.394 [2024-12-06 10:29:19.339033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.778  [2024-12-06T10:29:21.525Z] Copying: 641/1024 [MB] (641 MBps) [2024-12-06T10:29:21.794Z] Copying: 1024/1024 [MB] (average 643 MBps) 00:30:15.627 00:30:15.627 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:15.627 10:29:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:18.154 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:18.154 Fill FTL, iteration 2 00:30:18.154 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=eccfea2e5f972cd095226721b9dff40f 00:30:18.154 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:18.155 10:29:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:18.155 [2024-12-06 10:29:23.923705] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
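Pulling the traced variables together, the fill-and-checksum loop that produces the "Fill FTL, iteration N" passes and the recorded sums behaves like this sketch (variables as set at upgrade_shutdown.sh@28-35; $testfile stands in for test/ftl/file):

  bs=1048576; count=1024; qd=2; iterations=2
  seek=0; skip=0; sums=()
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')   # e.g. eccfea2e... for iteration 1
  done

The sums array is kept so the same ranges can be read back and compared after the shutdown/upgrade cycle later in the test.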
00:30:18.155 [2024-12-06 10:29:23.923823] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83243 ] 00:30:18.155 [2024-12-06 10:29:24.078403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.155 [2024-12-06 10:29:24.152761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.539  [2024-12-06T10:29:26.648Z] Copying: 254/1024 [MB] (254 MBps) [2024-12-06T10:29:27.590Z] Copying: 506/1024 [MB] (252 MBps) [2024-12-06T10:29:28.531Z] Copying: 758/1024 [MB] (252 MBps) [2024-12-06T10:29:28.531Z] Copying: 1012/1024 [MB] (254 MBps) [2024-12-06T10:29:29.100Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:30:22.933 00:30:22.933 Calculate MD5 checksum, iteration 2 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:22.933 10:29:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:23.191 [2024-12-06 10:29:29.135485] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:30:23.191 [2024-12-06 10:29:29.135572] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83302 ] 00:30:23.191 [2024-12-06 10:29:29.286161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.448 [2024-12-06 10:29:29.361862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.830  [2024-12-06T10:29:31.566Z] Copying: 626/1024 [MB] (626 MBps) [2024-12-06T10:29:32.508Z] Copying: 1024/1024 [MB] (average 629 MBps) 00:30:26.341 00:30:26.341 10:29:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:26.341 10:29:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:28.866 10:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:28.866 10:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=0b01df2827a4dae53d1b7e9250dee130 00:30:28.866 10:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:28.866 10:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:28.866 10:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:28.866 [2024-12-06 10:29:34.607164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.866 [2024-12-06 10:29:34.607215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:28.866 [2024-12-06 10:29:34.607228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:28.866 [2024-12-06 10:29:34.607235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.866 [2024-12-06 10:29:34.607253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.866 [2024-12-06 10:29:34.607263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:28.866 [2024-12-06 10:29:34.607270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:28.866 [2024-12-06 10:29:34.607276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.866 [2024-12-06 10:29:34.607291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.866 [2024-12-06 10:29:34.607298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:28.866 [2024-12-06 10:29:34.607305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:28.866 [2024-12-06 10:29:34.607310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.866 [2024-12-06 10:29:34.607364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.188 ms, result 0 00:30:28.866 true 00:30:28.866 10:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.866 { 00:30:28.866 "name": "ftl", 00:30:28.866 "properties": [ 00:30:28.866 { 00:30:28.866 "name": "superblock_version", 00:30:28.866 "value": 5, 00:30:28.866 "read-only": true 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "name": "base_device", 00:30:28.866 "bands": [ 00:30:28.866 { 00:30:28.866 "id": 0, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 
00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 1, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 2, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 3, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 4, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 5, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 6, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 7, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 8, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 9, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 10, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 11, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 12, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 13, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 14, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 15, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 16, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 17, 00:30:28.866 "state": "FREE", 00:30:28.866 "validity": 0.0 00:30:28.866 } 00:30:28.866 ], 00:30:28.866 "read-only": true 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "name": "cache_device", 00:30:28.866 "type": "bdev", 00:30:28.866 "chunks": [ 00:30:28.866 { 00:30:28.866 "id": 0, 00:30:28.866 "state": "INACTIVE", 00:30:28.866 "utilization": 0.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 1, 00:30:28.866 "state": "CLOSED", 00:30:28.866 "utilization": 1.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 2, 00:30:28.866 "state": "CLOSED", 00:30:28.866 "utilization": 1.0 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 3, 00:30:28.866 "state": "OPEN", 00:30:28.866 "utilization": 0.001953125 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "id": 4, 00:30:28.866 "state": "OPEN", 00:30:28.866 "utilization": 0.0 00:30:28.866 } 00:30:28.866 ], 00:30:28.866 "read-only": true 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "name": "verbose_mode", 00:30:28.866 "value": true, 00:30:28.866 "unit": "", 00:30:28.866 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:28.866 }, 00:30:28.866 { 00:30:28.866 "name": "prep_upgrade_on_shutdown", 00:30:28.866 "value": false, 00:30:28.866 "unit": "", 00:30:28.866 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:28.866 } 00:30:28.866 ] 00:30:28.866 } 00:30:28.866 10:29:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:28.866 [2024-12-06 10:29:34.991588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
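Before triggering the shutdown path, the test flips verbose_mode and prep_upgrade_on_shutdown through bdev_ftl_set_property and, using the jq filter applied a few lines below, requires that some cache chunks are actually in use. A condensed sketch of that check (upgrade_shutdown.sh@52-64; the abort on zero used chunks is an assumption about the untraced branch):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  used=$($rpc bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -eq 0 ]] && exit 1   # assumed guard: the test needs dirty chunks before shutdown (used=3 in this run)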
00:30:28.866 [2024-12-06 10:29:34.991750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:28.866 [2024-12-06 10:29:34.991814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:28.866 [2024-12-06 10:29:34.991838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.866 [2024-12-06 10:29:34.991880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.866 [2024-12-06 10:29:34.991902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:28.866 [2024-12-06 10:29:34.991922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:28.866 [2024-12-06 10:29:34.991940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.866 [2024-12-06 10:29:34.991970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.866 [2024-12-06 10:29:34.991990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:28.866 [2024-12-06 10:29:34.992010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:28.866 [2024-12-06 10:29:34.992061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.866 [2024-12-06 10:29:34.992153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.550 ms, result 0 00:30:28.866 true 00:30:28.866 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:28.866 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:28.866 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:29.125 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:29.125 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:29.125 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:29.385 [2024-12-06 10:29:35.359936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.385 [2024-12-06 10:29:35.360063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:29.385 [2024-12-06 10:29:35.360113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:29.385 [2024-12-06 10:29:35.360134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.385 [2024-12-06 10:29:35.360171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.385 [2024-12-06 10:29:35.360201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:29.385 [2024-12-06 10:29:35.360220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:29.385 [2024-12-06 10:29:35.360238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.385 [2024-12-06 10:29:35.360268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.385 [2024-12-06 10:29:35.360288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:29.385 [2024-12-06 10:29:35.360308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:29.385 [2024-12-06 10:29:35.360351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:29.385 [2024-12-06 10:29:35.360424] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.473 ms, result 0 00:30:29.385 true 00:30:29.385 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:29.661 { 00:30:29.661 "name": "ftl", 00:30:29.661 "properties": [ 00:30:29.661 { 00:30:29.661 "name": "superblock_version", 00:30:29.661 "value": 5, 00:30:29.661 "read-only": true 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "name": "base_device", 00:30:29.661 "bands": [ 00:30:29.661 { 00:30:29.661 "id": 0, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 1, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 2, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 3, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 4, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 5, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 6, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 7, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 8, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 9, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 10, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 11, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 12, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 13, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 14, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 15, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 16, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 17, 00:30:29.661 "state": "FREE", 00:30:29.661 "validity": 0.0 00:30:29.661 } 00:30:29.661 ], 00:30:29.661 "read-only": true 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "name": "cache_device", 00:30:29.661 "type": "bdev", 00:30:29.661 "chunks": [ 00:30:29.661 { 00:30:29.661 "id": 0, 00:30:29.661 "state": "INACTIVE", 00:30:29.661 "utilization": 0.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 1, 00:30:29.661 "state": "CLOSED", 00:30:29.661 "utilization": 1.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 2, 00:30:29.661 "state": "CLOSED", 00:30:29.661 "utilization": 1.0 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 3, 00:30:29.661 "state": "OPEN", 00:30:29.661 "utilization": 0.001953125 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "id": 4, 00:30:29.661 "state": "OPEN", 00:30:29.661 "utilization": 0.0 00:30:29.661 } 00:30:29.661 ], 00:30:29.661 "read-only": true 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "name": "verbose_mode", 
00:30:29.661 "value": true, 00:30:29.661 "unit": "", 00:30:29.661 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:29.661 }, 00:30:29.661 { 00:30:29.661 "name": "prep_upgrade_on_shutdown", 00:30:29.661 "value": true, 00:30:29.661 "unit": "", 00:30:29.661 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:29.661 } 00:30:29.661 ] 00:30:29.661 } 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82956 ]] 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82956 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82956 ']' 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82956 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82956 00:30:29.661 killing process with pid 82956 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82956' 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82956 00:30:29.661 10:29:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82956 00:30:30.250 [2024-12-06 10:29:36.296707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:30.250 [2024-12-06 10:29:36.308723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.250 [2024-12-06 10:29:36.308754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:30.250 [2024-12-06 10:29:36.308764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:30.250 [2024-12-06 10:29:36.308770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:30.250 [2024-12-06 10:29:36.308787] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:30.250 [2024-12-06 10:29:36.310886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:30.250 [2024-12-06 10:29:36.310910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:30.250 [2024-12-06 10:29:36.310918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.089 ms 00:30:30.250 [2024-12-06 10:29:36.310928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.738993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.739046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:38.379 [2024-12-06 10:29:43.739058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7428.022 ms 00:30:38.379 [2024-12-06 10:29:43.739065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.740100] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.740114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:38.379 [2024-12-06 10:29:43.740121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.023 ms 00:30:38.379 [2024-12-06 10:29:43.740127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.741014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.741028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:38.379 [2024-12-06 10:29:43.741040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.865 ms 00:30:38.379 [2024-12-06 10:29:43.741046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.748624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.748651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:38.379 [2024-12-06 10:29:43.748658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.554 ms 00:30:38.379 [2024-12-06 10:29:43.748664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.754101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.754129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:38.379 [2024-12-06 10:29:43.754137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.412 ms 00:30:38.379 [2024-12-06 10:29:43.754144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.754198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.754209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:38.379 [2024-12-06 10:29:43.754216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:38.379 [2024-12-06 10:29:43.754221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.761495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.761613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:38.379 [2024-12-06 10:29:43.761625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.262 ms 00:30:38.379 [2024-12-06 10:29:43.761631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.768595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.768685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:38.379 [2024-12-06 10:29:43.768696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.940 ms 00:30:38.379 [2024-12-06 10:29:43.768702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.776113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.776216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:38.379 [2024-12-06 10:29:43.776227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.389 ms 00:30:38.379 [2024-12-06 10:29:43.776233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.783311] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.379 [2024-12-06 10:29:43.783403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:38.379 [2024-12-06 10:29:43.783414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.023 ms 00:30:38.379 [2024-12-06 10:29:43.783420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.379 [2024-12-06 10:29:43.783441] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:38.379 [2024-12-06 10:29:43.783472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:38.379 [2024-12-06 10:29:43.783479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:38.379 [2024-12-06 10:29:43.783486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:38.379 [2024-12-06 10:29:43.783492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:38.379 [2024-12-06 10:29:43.783498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:38.379 [2024-12-06 10:29:43.783504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:38.379 [2024-12-06 10:29:43.783509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:38.379 [2024-12-06 10:29:43.783515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:38.380 [2024-12-06 10:29:43.783578] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:38.380 [2024-12-06 10:29:43.783584] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 471f812c-23f7-41c3-b211-e349b3c88310 00:30:38.380 [2024-12-06 10:29:43.783590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:38.380 [2024-12-06 10:29:43.783596] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:38.380 [2024-12-06 10:29:43.783601] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:38.380 [2024-12-06 10:29:43.783609] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:38.380 [2024-12-06 10:29:43.783616] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:38.380 [2024-12-06 10:29:43.783622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:38.380 [2024-12-06 10:29:43.783627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:38.380 [2024-12-06 10:29:43.783633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:38.380 [2024-12-06 10:29:43.783638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:38.380 [2024-12-06 10:29:43.783644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.380 [2024-12-06 10:29:43.783650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:38.380 [2024-12-06 10:29:43.783657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.204 ms 00:30:38.380 [2024-12-06 10:29:43.783662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.793217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.380 [2024-12-06 10:29:43.793244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:38.380 [2024-12-06 10:29:43.793251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.542 ms 00:30:38.380 [2024-12-06 10:29:43.793258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.793538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.380 [2024-12-06 10:29:43.793550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:38.380 [2024-12-06 10:29:43.793557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:30:38.380 [2024-12-06 10:29:43.793562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.826575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.826603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:38.380 [2024-12-06 10:29:43.826611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.826617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.826639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.826645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:38.380 [2024-12-06 10:29:43.826651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.826657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.826702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.826709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:38.380 [2024-12-06 10:29:43.826718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.826724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.826735] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.826742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:38.380 [2024-12-06 10:29:43.826747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.826753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.885497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.885623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:38.380 [2024-12-06 10:29:43.885636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.885643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.933495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.933525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:38.380 [2024-12-06 10:29:43.933533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.933540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.933599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.933607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:38.380 [2024-12-06 10:29:43.933614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.933623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.933654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.933661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:38.380 [2024-12-06 10:29:43.933668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.933674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.933743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.933751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:38.380 [2024-12-06 10:29:43.933757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.933763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.933789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.933796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:38.380 [2024-12-06 10:29:43.933802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.933807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.933836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.933843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:38.380 [2024-12-06 10:29:43.933849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.933854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 
[2024-12-06 10:29:43.933889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:38.380 [2024-12-06 10:29:43.933896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:38.380 [2024-12-06 10:29:43.933902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:38.380 [2024-12-06 10:29:43.933908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.380 [2024-12-06 10:29:43.933999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7625.230 ms, result 0 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83483 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83483 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83483 ']' 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.953 10:29:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:38.953 [2024-12-06 10:29:45.029015] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
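After the clean FTL shutdown above ('FTL shutdown', duration = 7625.230 ms, result 0), tcp_target_setup restarts the primary target from the config saved earlier and blocks until its RPC socket answers; a sketch of the restart-and-wait pattern, where the backgrounding and pid capture are assumptions (waitforlisten is the autotest_common.sh helper seen in the trace):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!               # 83483 in this run
  waitforlisten $spdk_tgt_pid   # returns once /var/tmp/spdk.sock accepts RPCs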
00:30:38.953 [2024-12-06 10:29:45.029256] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83483 ] 00:30:39.214 [2024-12-06 10:29:45.186194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.214 [2024-12-06 10:29:45.261250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.785 [2024-12-06 10:29:45.831774] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:39.785 [2024-12-06 10:29:45.831949] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:40.047 [2024-12-06 10:29:45.974840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.047 [2024-12-06 10:29:45.974965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:40.047 [2024-12-06 10:29:45.974981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:40.047 [2024-12-06 10:29:45.974989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.047 [2024-12-06 10:29:45.975037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.047 [2024-12-06 10:29:45.975045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:40.047 [2024-12-06 10:29:45.975053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:40.047 [2024-12-06 10:29:45.975059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.975082] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:40.048 [2024-12-06 10:29:45.975621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:40.048 [2024-12-06 10:29:45.975634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.975640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:40.048 [2024-12-06 10:29:45.975647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms 00:30:40.048 [2024-12-06 10:29:45.975652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.976720] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:40.048 [2024-12-06 10:29:45.986209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.986326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:40.048 [2024-12-06 10:29:45.986340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.490 ms 00:30:40.048 [2024-12-06 10:29:45.986346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.986388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.986396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:40.048 [2024-12-06 10:29:45.986403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:40.048 [2024-12-06 10:29:45.986408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.990762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 
10:29:45.990788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:40.048 [2024-12-06 10:29:45.990795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.291 ms 00:30:40.048 [2024-12-06 10:29:45.990801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.990844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.990851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:40.048 [2024-12-06 10:29:45.990858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:40.048 [2024-12-06 10:29:45.990863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.990899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.990908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:40.048 [2024-12-06 10:29:45.990915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:40.048 [2024-12-06 10:29:45.990920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.990936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:40.048 [2024-12-06 10:29:45.993698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.993723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:40.048 [2024-12-06 10:29:45.993729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.765 ms 00:30:40.048 [2024-12-06 10:29:45.993735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.993810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.993817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:40.048 [2024-12-06 10:29:45.993823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:40.048 [2024-12-06 10:29:45.993829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.993844] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:40.048 [2024-12-06 10:29:45.993860] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:40.048 [2024-12-06 10:29:45.993886] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:40.048 [2024-12-06 10:29:45.993897] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:40.048 [2024-12-06 10:29:45.993976] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:40.048 [2024-12-06 10:29:45.993983] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:40.048 [2024-12-06 10:29:45.993991] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:40.048 [2024-12-06 10:29:45.993999] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994007] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994013] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:40.048 [2024-12-06 10:29:45.994018] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:40.048 [2024-12-06 10:29:45.994024] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:40.048 [2024-12-06 10:29:45.994030] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:40.048 [2024-12-06 10:29:45.994035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.994041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:40.048 [2024-12-06 10:29:45.994047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:30:40.048 [2024-12-06 10:29:45.994052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.994117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.048 [2024-12-06 10:29:45.994123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:40.048 [2024-12-06 10:29:45.994130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:40.048 [2024-12-06 10:29:45.994136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.048 [2024-12-06 10:29:45.994210] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:40.048 [2024-12-06 10:29:45.994218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:40.048 [2024-12-06 10:29:45.994224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:40.048 [2024-12-06 10:29:45.994240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:40.048 [2024-12-06 10:29:45.994250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:40.048 [2024-12-06 10:29:45.994256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:40.048 [2024-12-06 10:29:45.994262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:40.048 [2024-12-06 10:29:45.994272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:40.048 [2024-12-06 10:29:45.994277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:40.048 [2024-12-06 10:29:45.994287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:40.048 [2024-12-06 10:29:45.994292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:40.048 [2024-12-06 10:29:45.994302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:40.048 [2024-12-06 10:29:45.994307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994313] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:40.048 [2024-12-06 10:29:45.994319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:40.048 [2024-12-06 10:29:45.994324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:40.048 [2024-12-06 10:29:45.994338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:40.048 [2024-12-06 10:29:45.994343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:40.048 [2024-12-06 10:29:45.994353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:40.048 [2024-12-06 10:29:45.994357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:40.048 [2024-12-06 10:29:45.994367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:40.048 [2024-12-06 10:29:45.994372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:40.048 [2024-12-06 10:29:45.994382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:40.048 [2024-12-06 10:29:45.994387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:40.048 [2024-12-06 10:29:45.994396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:40.048 [2024-12-06 10:29:45.994411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:40.048 [2024-12-06 10:29:45.994425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:40.048 [2024-12-06 10:29:45.994430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994435] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:40.048 [2024-12-06 10:29:45.994441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:40.048 [2024-12-06 10:29:45.994461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:40.048 [2024-12-06 10:29:45.994469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.048 [2024-12-06 10:29:45.994475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:40.049 [2024-12-06 10:29:45.994480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:40.049 [2024-12-06 10:29:45.994485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:40.049 [2024-12-06 10:29:45.994491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:40.049 [2024-12-06 10:29:45.994497] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:40.049 [2024-12-06 10:29:45.994503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:40.049 [2024-12-06 10:29:45.994509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:40.049 [2024-12-06 10:29:45.994516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:40.049 [2024-12-06 10:29:45.994528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:40.049 [2024-12-06 10:29:45.994544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:40.049 [2024-12-06 10:29:45.994550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:40.049 [2024-12-06 10:29:45.994555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:40.049 [2024-12-06 10:29:45.994560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:40.049 [2024-12-06 10:29:45.994597] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:40.049 [2024-12-06 10:29:45.994603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994608] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:40.049 [2024-12-06 10:29:45.994614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:40.049 [2024-12-06 10:29:45.994619] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:40.049 [2024-12-06 10:29:45.994625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:40.049 [2024-12-06 10:29:45.994630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.049 [2024-12-06 10:29:45.994635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:40.049 [2024-12-06 10:29:45.994641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.473 ms 00:30:40.049 [2024-12-06 10:29:45.994646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.049 [2024-12-06 10:29:45.994678] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:40.049 [2024-12-06 10:29:45.994690] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:44.252 [2024-12-06 10:29:49.898226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.252 [2024-12-06 10:29:49.898554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:44.252 [2024-12-06 10:29:49.898583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3903.531 ms 00:30:44.252 [2024-12-06 10:29:49.898594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.252 [2024-12-06 10:29:49.929848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.252 [2024-12-06 10:29:49.930055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:44.252 [2024-12-06 10:29:49.930076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.001 ms 00:30:44.252 [2024-12-06 10:29:49.930086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.252 [2024-12-06 10:29:49.930193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.252 [2024-12-06 10:29:49.930204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:44.252 [2024-12-06 10:29:49.930214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:44.252 [2024-12-06 10:29:49.930222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.252 [2024-12-06 10:29:49.965155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.252 [2024-12-06 10:29:49.965355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:44.252 [2024-12-06 10:29:49.965382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.893 ms 00:30:44.252 [2024-12-06 10:29:49.965391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.252 [2024-12-06 10:29:49.965432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.252 [2024-12-06 10:29:49.965442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:44.252 [2024-12-06 10:29:49.965474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.252 [2024-12-06 10:29:49.965483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.252 [2024-12-06 10:29:49.966081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.252 [2024-12-06 10:29:49.966106] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:44.253 [2024-12-06 10:29:49.966118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.538 ms 00:30:44.253 [2024-12-06 10:29:49.966134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:49.966189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:49.966200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:44.253 [2024-12-06 10:29:49.966210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:44.253 [2024-12-06 10:29:49.966217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:49.984171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:49.984370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:44.253 [2024-12-06 10:29:49.984389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.931 ms 00:30:44.253 [2024-12-06 10:29:49.984398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.013660] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:44.253 [2024-12-06 10:29:50.013728] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:44.253 [2024-12-06 10:29:50.013748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.013759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:44.253 [2024-12-06 10:29:50.013771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.169 ms 00:30:44.253 [2024-12-06 10:29:50.013780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.029819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.029874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:44.253 [2024-12-06 10:29:50.029887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.966 ms 00:30:44.253 [2024-12-06 10:29:50.029896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.043144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.043215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:44.253 [2024-12-06 10:29:50.043228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.185 ms 00:30:44.253 [2024-12-06 10:29:50.043236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.056098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.056152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:44.253 [2024-12-06 10:29:50.056166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.806 ms 00:30:44.253 [2024-12-06 10:29:50.056174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.056946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.056977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:44.253 [2024-12-06 
10:29:50.056988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.628 ms 00:30:44.253 [2024-12-06 10:29:50.056995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.124352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.124706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:44.253 [2024-12-06 10:29:50.124734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 67.332 ms 00:30:44.253 [2024-12-06 10:29:50.124744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.136801] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:44.253 [2024-12-06 10:29:50.138034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.138083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:44.253 [2024-12-06 10:29:50.138096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.143 ms 00:30:44.253 [2024-12-06 10:29:50.138105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.138230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.138243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:44.253 [2024-12-06 10:29:50.138254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:44.253 [2024-12-06 10:29:50.138262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.138326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.138338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:44.253 [2024-12-06 10:29:50.138347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:44.253 [2024-12-06 10:29:50.138355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.138379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.138392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:44.253 [2024-12-06 10:29:50.138401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.253 [2024-12-06 10:29:50.138410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.138484] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:44.253 [2024-12-06 10:29:50.138497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.138506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:44.253 [2024-12-06 10:29:50.138515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:44.253 [2024-12-06 10:29:50.138524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.165123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.165185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:44.253 [2024-12-06 10:29:50.165200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.573 ms 00:30:44.253 [2024-12-06 10:29:50.165209] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.165306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.253 [2024-12-06 10:29:50.165316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:44.253 [2024-12-06 10:29:50.165326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:30:44.253 [2024-12-06 10:29:50.165334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.253 [2024-12-06 10:29:50.167030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4191.649 ms, result 0 00:30:44.253 [2024-12-06 10:29:50.181612] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.253 [2024-12-06 10:29:50.197612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:44.253 [2024-12-06 10:29:50.205919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:44.253 10:29:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.253 10:29:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:44.253 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:44.253 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:44.253 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:44.514 [2024-12-06 10:29:50.441903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.514 [2024-12-06 10:29:50.441964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:44.514 [2024-12-06 10:29:50.441984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:44.514 [2024-12-06 10:29:50.441992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.514 [2024-12-06 10:29:50.442018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.514 [2024-12-06 10:29:50.442026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:44.514 [2024-12-06 10:29:50.442035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:44.514 [2024-12-06 10:29:50.442043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.514 [2024-12-06 10:29:50.442064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.514 [2024-12-06 10:29:50.442073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:44.514 [2024-12-06 10:29:50.442082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:44.514 [2024-12-06 10:29:50.442093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.514 [2024-12-06 10:29:50.442153] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.242 ms, result 0 00:30:44.514 true 00:30:44.514 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:44.514 { 00:30:44.514 "name": "ftl", 00:30:44.514 "properties": [ 00:30:44.514 { 00:30:44.514 "name": "superblock_version", 00:30:44.514 "value": 5, 00:30:44.514 "read-only": true 00:30:44.514 }, 
00:30:44.514 { 00:30:44.514 "name": "base_device", 00:30:44.514 "bands": [ 00:30:44.514 { 00:30:44.514 "id": 0, 00:30:44.514 "state": "CLOSED", 00:30:44.514 "validity": 1.0 00:30:44.514 }, 00:30:44.515 { 00:30:44.515 "id": 1, 00:30:44.515 "state": "CLOSED", 00:30:44.515 "validity": 1.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 2, 00:30:44.515 "state": "CLOSED", 00:30:44.515 "validity": 0.007843137254901933 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 3, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 4, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 5, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 6, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 7, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 8, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 9, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 10, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 11, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 12, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 13, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 14, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 15, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 16, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 17, 00:30:44.515 "state": "FREE", 00:30:44.515 "validity": 0.0 00:30:44.515 } 00:30:44.515 ], 00:30:44.515 "read-only": true 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "name": "cache_device", 00:30:44.515 "type": "bdev", 00:30:44.515 "chunks": [ 00:30:44.515 { 00:30:44.515 "id": 0, 00:30:44.515 "state": "INACTIVE", 00:30:44.515 "utilization": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 1, 00:30:44.515 "state": "OPEN", 00:30:44.515 "utilization": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 2, 00:30:44.515 "state": "OPEN", 00:30:44.515 "utilization": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 3, 00:30:44.515 "state": "FREE", 00:30:44.515 "utilization": 0.0 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "id": 4, 00:30:44.515 "state": "FREE", 00:30:44.515 "utilization": 0.0 00:30:44.515 } 00:30:44.515 ], 00:30:44.515 "read-only": true 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "name": "verbose_mode", 00:30:44.515 "value": true, 00:30:44.515 "unit": "", 00:30:44.515 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:44.515 }, 00:30:44.515 { 00:30:44.515 "name": "prep_upgrade_on_shutdown", 00:30:44.515 "value": false, 00:30:44.515 "unit": "", 00:30:44.515 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:44.515 } 00:30:44.515 ] 00:30:44.515 } 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:44.776 10:29:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:45.038 Validate MD5 checksum, iteration 1 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:45.038 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:45.039 10:29:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:45.300 [2024-12-06 10:29:51.204906] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:30:45.300 [2024-12-06 10:29:51.205034] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83563 ] 00:30:45.300 [2024-12-06 10:29:51.370488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.562 [2024-12-06 10:29:51.493869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.948  [2024-12-06T10:29:54.058Z] Copying: 641/1024 [MB] (641 MBps) [2024-12-06T10:29:55.001Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:30:48.834 00:30:49.094 10:29:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:49.094 10:29:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:51.642 Validate MD5 checksum, iteration 2 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=eccfea2e5f972cd095226721b9dff40f 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ eccfea2e5f972cd095226721b9dff40f != \e\c\c\f\e\a\2\e\5\f\9\7\2\c\d\0\9\5\2\2\6\7\2\1\b\9\d\f\f\4\0\f ]] 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:51.642 10:29:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.642 [2024-12-06 10:29:57.252335] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
00:30:51.642 [2024-12-06 10:29:57.252644] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83630 ] 00:30:51.642 [2024-12-06 10:29:57.412096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.642 [2024-12-06 10:29:57.505268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.027  [2024-12-06T10:29:59.763Z] Copying: 623/1024 [MB] (623 MBps) [2024-12-06T10:30:05.049Z] Copying: 1024/1024 [MB] (average 625 MBps) 00:30:58.882 00:30:58.882 10:30:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:58.882 10:30:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0b01df2827a4dae53d1b7e9250dee130 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0b01df2827a4dae53d1b7e9250dee130 != \0\b\0\1\d\f\2\8\2\7\a\4\d\a\e\5\3\d\1\b\7\e\9\2\5\0\d\e\e\1\3\0 ]] 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83483 ]] 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83483 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83727 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83727 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83727 ']' 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:00.260 10:30:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:00.520 [2024-12-06 10:30:06.488403] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:31:00.520 [2024-12-06 10:30:06.489042] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83727 ] 00:31:00.520 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83483 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:00.520 [2024-12-06 10:30:06.648209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.781 [2024-12-06 10:30:06.723998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.355 [2024-12-06 10:30:07.290247] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:01.355 [2024-12-06 10:30:07.290412] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:01.355 [2024-12-06 10:30:07.433049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.355 [2024-12-06 10:30:07.433251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:01.355 [2024-12-06 10:30:07.433301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:01.355 [2024-12-06 10:30:07.433338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.355 [2024-12-06 10:30:07.433420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.355 [2024-12-06 10:30:07.433472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:01.355 [2024-12-06 10:30:07.433578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:31:01.355 [2024-12-06 10:30:07.433631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.355 [2024-12-06 10:30:07.433681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:01.355 [2024-12-06 10:30:07.434208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:01.355 [2024-12-06 10:30:07.434319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.355 [2024-12-06 10:30:07.434343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:01.355 [2024-12-06 10:30:07.434361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.647 ms 00:31:01.355 [2024-12-06 10:30:07.434431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.355 [2024-12-06 10:30:07.434676] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:01.355 [2024-12-06 10:30:07.447113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.355 [2024-12-06 10:30:07.447255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:01.355 [2024-12-06 10:30:07.447342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.438 ms 
00:31:01.355 [2024-12-06 10:30:07.447395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.355 [2024-12-06 10:30:07.454136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.355 [2024-12-06 10:30:07.454221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:01.355 [2024-12-06 10:30:07.454315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:31:01.355 [2024-12-06 10:30:07.454334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.355 [2024-12-06 10:30:07.454791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.355 [2024-12-06 10:30:07.454934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:01.355 [2024-12-06 10:30:07.455018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.280 ms 00:31:01.355 [2024-12-06 10:30:07.455053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.355 [2024-12-06 10:30:07.455122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.355 [2024-12-06 10:30:07.455194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:01.356 [2024-12-06 10:30:07.455237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:31:01.356 [2024-12-06 10:30:07.455280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.356 [2024-12-06 10:30:07.455328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.356 [2024-12-06 10:30:07.455371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:01.356 [2024-12-06 10:30:07.455482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:01.356 [2024-12-06 10:30:07.455519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.356 [2024-12-06 10:30:07.455562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:01.356 [2024-12-06 10:30:07.457834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.356 [2024-12-06 10:30:07.457928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:01.356 [2024-12-06 10:30:07.458012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.276 ms 00:31:01.356 [2024-12-06 10:30:07.458061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.356 [2024-12-06 10:30:07.458120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.356 [2024-12-06 10:30:07.458162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:01.356 [2024-12-06 10:30:07.458241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:01.356 [2024-12-06 10:30:07.458284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.356 [2024-12-06 10:30:07.458346] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:01.356 [2024-12-06 10:30:07.458401] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:01.356 [2024-12-06 10:30:07.458566] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:01.356 [2024-12-06 10:30:07.458660] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:01.356 [2024-12-06 
10:30:07.458818] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:01.356 [2024-12-06 10:30:07.458912] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:01.356 [2024-12-06 10:30:07.458965] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:01.356 [2024-12-06 10:30:07.459042] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:01.356 [2024-12-06 10:30:07.459147] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:01.356 [2024-12-06 10:30:07.459202] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:01.356 [2024-12-06 10:30:07.459244] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:01.356 [2024-12-06 10:30:07.459318] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:01.356 [2024-12-06 10:30:07.459363] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:01.356 [2024-12-06 10:30:07.459406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.356 [2024-12-06 10:30:07.459495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:01.356 [2024-12-06 10:30:07.459538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.062 ms 00:31:01.356 [2024-12-06 10:30:07.459612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.356 [2024-12-06 10:30:07.459721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.356 [2024-12-06 10:30:07.459764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:01.356 [2024-12-06 10:30:07.459838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:01.356 [2024-12-06 10:30:07.459881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.356 [2024-12-06 10:30:07.460029] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:01.356 [2024-12-06 10:30:07.460090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:01.356 [2024-12-06 10:30:07.460148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:01.356 [2024-12-06 10:30:07.460220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.460274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:01.356 [2024-12-06 10:30:07.460314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.460394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:01.356 [2024-12-06 10:30:07.460435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:01.356 [2024-12-06 10:30:07.460488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:01.356 [2024-12-06 10:30:07.460555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.460626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:01.356 [2024-12-06 10:30:07.460671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:01.356 [2024-12-06 10:30:07.460734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 
10:30:07.460806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:01.356 [2024-12-06 10:30:07.460848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:01.356 [2024-12-06 10:30:07.460913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.460981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:01.356 [2024-12-06 10:30:07.461023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:01.356 [2024-12-06 10:30:07.461087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.461156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:01.356 [2024-12-06 10:30:07.461199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:01.356 [2024-12-06 10:30:07.461271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:01.356 [2024-12-06 10:30:07.461333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:01.356 [2024-12-06 10:30:07.461378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:01.356 [2024-12-06 10:30:07.461474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:01.356 [2024-12-06 10:30:07.461517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:01.356 [2024-12-06 10:30:07.461564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:01.356 [2024-12-06 10:30:07.461625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:01.356 [2024-12-06 10:30:07.461682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:01.356 [2024-12-06 10:30:07.461763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:01.356 [2024-12-06 10:30:07.461809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:01.356 [2024-12-06 10:30:07.461875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:01.356 [2024-12-06 10:30:07.461923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:01.356 [2024-12-06 10:30:07.461969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.462064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:01.356 [2024-12-06 10:30:07.462132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:01.356 [2024-12-06 10:30:07.462181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.462224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:01.356 [2024-12-06 10:30:07.462304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:01.356 [2024-12-06 10:30:07.462363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.462401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:01.356 [2024-12-06 10:30:07.462482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:01.356 [2024-12-06 10:30:07.462563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.462606] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:01.356 [2024-12-06 10:30:07.462689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:01.356 
[2024-12-06 10:30:07.462708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:01.356 [2024-12-06 10:30:07.462723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:01.356 [2024-12-06 10:30:07.462812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:01.356 [2024-12-06 10:30:07.462852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:01.356 [2024-12-06 10:30:07.462893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:01.356 [2024-12-06 10:30:07.462900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:01.356 [2024-12-06 10:30:07.462906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:01.356 [2024-12-06 10:30:07.462911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:01.356 [2024-12-06 10:30:07.462918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:01.356 [2024-12-06 10:30:07.462926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.356 [2024-12-06 10:30:07.462932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:01.356 [2024-12-06 10:30:07.462938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:01.356 [2024-12-06 10:30:07.462944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:01.356 [2024-12-06 10:30:07.462949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:01.356 [2024-12-06 10:30:07.462955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:01.356 [2024-12-06 10:30:07.462960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:01.356 [2024-12-06 10:30:07.462966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:01.356 [2024-12-06 10:30:07.462971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:01.356 [2024-12-06 10:30:07.462977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:01.357 [2024-12-06 10:30:07.462982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:01.357 [2024-12-06 10:30:07.462988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:01.357 [2024-12-06 10:30:07.462993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:01.357 [2024-12-06 10:30:07.462999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:01.357 [2024-12-06 10:30:07.463005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:01.357 [2024-12-06 10:30:07.463010] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:01.357 [2024-12-06 10:30:07.463019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.357 [2024-12-06 10:30:07.463025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:01.357 [2024-12-06 10:30:07.463030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:01.357 [2024-12-06 10:30:07.463036] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:01.357 [2024-12-06 10:30:07.463041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:01.357 [2024-12-06 10:30:07.463047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.463053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:01.357 [2024-12-06 10:30:07.463059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.071 ms 00:31:01.357 [2024-12-06 10:30:07.463065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.482462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.482487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:01.357 [2024-12-06 10:30:07.482496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.344 ms 00:31:01.357 [2024-12-06 10:30:07.482503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.482532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.482538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:01.357 [2024-12-06 10:30:07.482545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:01.357 [2024-12-06 10:30:07.482550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.506692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.506714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:01.357 [2024-12-06 10:30:07.506722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.100 ms 00:31:01.357 [2024-12-06 10:30:07.506728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.506747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.506753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:01.357 [2024-12-06 10:30:07.506760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:01.357 [2024-12-06 10:30:07.506768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.506833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.506841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
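
The dump_region lines above print each region in MiB, while the superblock dump lists the same regions as blk_offs/blk_sz in hex FTL blocks. Assuming FTL's 4 KiB block size, the two views agree; a small sketch of the conversion (blk_to_mib is a hypothetical helper, not part of the test suite):

    blk_to_mib() { echo "scale=2; $1 * 4096 / 1048576" | bc; }  # FTL blocks -> MiB, assuming 4 KiB blocks
    blk_to_mib $((0x480000))  # prints 18432.00 -- base-dev region type 0x9, matching "data_btm ... blocks: 18432.00 MiB"
    blk_to_mib $((0x40))      # prints .25 -- its block offset, matching "data_btm ... offset: 0.25 MiB"
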
00:31:01.357 [2024-12-06 10:30:07.506848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:01.357 [2024-12-06 10:30:07.506854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.506883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.506889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:01.357 [2024-12-06 10:30:07.506895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:31:01.357 [2024-12-06 10:30:07.506901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.518200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.518222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:01.357 [2024-12-06 10:30:07.518230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.280 ms 00:31:01.357 [2024-12-06 10:30:07.518238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.357 [2024-12-06 10:30:07.518305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.357 [2024-12-06 10:30:07.518314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:01.357 [2024-12-06 10:30:07.518320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:01.357 [2024-12-06 10:30:07.518326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.546555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.546594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:01.617 [2024-12-06 10:30:07.546608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.214 ms 00:31:01.617 [2024-12-06 10:30:07.546618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.554451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.554481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:01.617 [2024-12-06 10:30:07.554489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.390 ms 00:31:01.617 [2024-12-06 10:30:07.554495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.598135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.598175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:01.617 [2024-12-06 10:30:07.598184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.599 ms 00:31:01.617 [2024-12-06 10:30:07.598191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.598296] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:01.617 [2024-12-06 10:30:07.598368] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:01.617 [2024-12-06 10:30:07.598439] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:01.617 [2024-12-06 10:30:07.598520] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:01.617 [2024-12-06 10:30:07.598528] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.598535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:01.617 [2024-12-06 10:30:07.598542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:31:01.617 [2024-12-06 10:30:07.598548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.598590] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:01.617 [2024-12-06 10:30:07.598601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.598607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:01.617 [2024-12-06 10:30:07.598613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:01.617 [2024-12-06 10:30:07.598621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.610456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.610487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:01.617 [2024-12-06 10:30:07.610496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.818 ms 00:31:01.617 [2024-12-06 10:30:07.610502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.616774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.616809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:01.617 [2024-12-06 10:30:07.616817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:01.617 [2024-12-06 10:30:07.616823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.617 [2024-12-06 10:30:07.616888] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:01.617 [2024-12-06 10:30:07.616997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.617 [2024-12-06 10:30:07.617007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:01.617 [2024-12-06 10:30:07.617014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.110 ms 00:31:01.617 [2024-12-06 10:30:07.617019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.190 [2024-12-06 10:30:08.238432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.190 [2024-12-06 10:30:08.238511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:02.190 [2024-12-06 10:30:08.238526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 620.795 ms 00:31:02.190 [2024-12-06 10:30:08.238535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.190 [2024-12-06 10:30:08.242945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.190 [2024-12-06 10:30:08.242983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:02.190 [2024-12-06 10:30:08.242993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.422 ms 00:31:02.190 [2024-12-06 10:30:08.243006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.190 [2024-12-06 10:30:08.243722] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:31:02.190 [2024-12-06 10:30:08.243752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.190 [2024-12-06 10:30:08.243761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:02.190 [2024-12-06 10:30:08.243770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.719 ms 00:31:02.190 [2024-12-06 10:30:08.243777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.190 [2024-12-06 10:30:08.243808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.190 [2024-12-06 10:30:08.243816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:02.190 [2024-12-06 10:30:08.243824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:02.190 [2024-12-06 10:30:08.243836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:02.190 [2024-12-06 10:30:08.243869] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 626.980 ms, result 0 00:31:02.190 [2024-12-06 10:30:08.243905] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:02.190 [2024-12-06 10:30:08.243985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:02.190 [2024-12-06 10:30:08.243995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:02.190 [2024-12-06 10:30:08.244003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:31:02.190 [2024-12-06 10:30:08.244010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.131 [2024-12-06 10:30:08.979742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.131 [2024-12-06 10:30:08.979820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:03.131 [2024-12-06 10:30:08.979852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 734.723 ms 00:31:03.131 [2024-12-06 10:30:08.979861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.131 [2024-12-06 10:30:08.984899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.131 [2024-12-06 10:30:08.984947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:03.131 [2024-12-06 10:30:08.984959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.964 ms 00:31:03.131 [2024-12-06 10:30:08.984968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.131 [2024-12-06 10:30:08.985824] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:03.131 [2024-12-06 10:30:08.985873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.131 [2024-12-06 10:30:08.985882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:03.131 [2024-12-06 10:30:08.985893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:31:03.131 [2024-12-06 10:30:08.985901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.131 [2024-12-06 10:30:08.985940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.131 [2024-12-06 10:30:08.985950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:03.131 [2024-12-06 10:30:08.985959] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:03.131 [2024-12-06 10:30:08.985966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.131 [2024-12-06 10:30:08.986007] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 742.091 ms, result 0 00:31:03.131 [2024-12-06 10:30:08.986053] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:03.131 [2024-12-06 10:30:08.986067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:03.131 [2024-12-06 10:30:08.986078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:08.986086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:03.132 [2024-12-06 10:30:08.986096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1369.206 ms 00:31:03.132 [2024-12-06 10:30:08.986104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:08.986133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:08.986146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:03.132 [2024-12-06 10:30:08.986155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:03.132 [2024-12-06 10:30:08.986163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:08.998782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:03.132 [2024-12-06 10:30:08.998924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:08.998937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:03.132 [2024-12-06 10:30:08.998949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.744 ms 00:31:03.132 [2024-12-06 10:30:08.998957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:08.999705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:08.999740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:03.132 [2024-12-06 10:30:08.999751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.657 ms 00:31:03.132 [2024-12-06 10:30:08.999759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:09.002005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:09.002034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:03.132 [2024-12-06 10:30:09.002045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.228 ms 00:31:03.132 [2024-12-06 10:30:09.002053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:09.002099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:09.002108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:03.132 [2024-12-06 10:30:09.002121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:03.132 [2024-12-06 10:30:09.002130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:09.002239] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:09.002251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:03.132 [2024-12-06 10:30:09.002260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:03.132 [2024-12-06 10:30:09.002268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:09.002290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:09.002299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:03.132 [2024-12-06 10:30:09.002309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:03.132 [2024-12-06 10:30:09.002317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:09.002356] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:03.132 [2024-12-06 10:30:09.002368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:09.002376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:03.132 [2024-12-06 10:30:09.002384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:31:03.132 [2024-12-06 10:30:09.002392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:09.002443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.132 [2024-12-06 10:30:09.002468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:03.132 [2024-12-06 10:30:09.002481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:03.132 [2024-12-06 10:30:09.002489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.132 [2024-12-06 10:30:09.003733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1570.156 ms, result 0 00:31:03.132 [2024-12-06 10:30:09.019402] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.132 [2024-12-06 10:30:09.035383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:03.132 [2024-12-06 10:30:09.044390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:03.132 Validate MD5 checksum, iteration 1 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:03.132 10:30:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:03.132 [2024-12-06 10:30:09.146826] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:31:03.132 [2024-12-06 10:30:09.146929] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83766 ] 00:31:03.392 [2024-12-06 10:30:09.303642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.392 [2024-12-06 10:30:09.401391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.776  [2024-12-06T10:30:11.885Z] Copying: 591/1024 [MB] (591 MBps) [2024-12-06T10:30:13.273Z] Copying: 1024/1024 [MB] (average 569 MBps) 00:31:07.106 00:31:07.106 10:30:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:07.106 10:30:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:09.653 Validate MD5 checksum, iteration 2 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=eccfea2e5f972cd095226721b9dff40f 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ eccfea2e5f972cd095226721b9dff40f != \e\c\c\f\e\a\2\e\5\f\9\7\2\c\d\0\9\5\2\2\6\7\2\1\b\9\d\f\f\4\0\f ]] 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:09.653 10:30:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:09.653 [2024-12-06 10:30:15.287308] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 00:31:09.653 [2024-12-06 10:30:15.287412] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83834 ] 00:31:09.653 [2024-12-06 10:30:15.447075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.653 [2024-12-06 10:30:15.539154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.099  [2024-12-06T10:30:17.867Z] Copying: 589/1024 [MB] (589 MBps) [2024-12-06T10:30:18.811Z] Copying: 1024/1024 [MB] (average 598 MBps) 00:31:12.644 00:31:12.644 10:30:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:12.644 10:30:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=0b01df2827a4dae53d1b7e9250dee130 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 0b01df2827a4dae53d1b7e9250dee130 != \0\b\0\1\d\f\2\8\2\7\a\4\d\a\e\5\3\d\1\b\7\e\9\2\5\0\d\e\e\1\3\0 ]] 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:14.555 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83727 ]] 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83727 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83727 ']' 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83727 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83727 00:31:14.815 killing process with pid 83727 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- 
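
The two passes above are the whole of test_validate_checksum: dump the FTL bdev out over NVMe/TCP in 1 GiB windows, hash each window, and compare against the MD5 sums recorded earlier in the test. A condensed sketch of the loop's shape, with $testdir and the checksums array standing in for the script's real state:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        # read 1024 x 1 MiB blocks from the reloaded ftln1 bdev via spdk_dd
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # every window must hash to the value captured before the upgrade/shutdown
        [[ $sum == "${checksums[i]}" ]] || return 1
    done
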
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83727' 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83727 00:31:14.815 10:30:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83727 00:31:15.387 [2024-12-06 10:30:21.371095] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:15.387 [2024-12-06 10:30:21.381721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.381757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:15.387 [2024-12-06 10:30:21.381768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:15.387 [2024-12-06 10:30:21.381774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.381792] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:15.387 [2024-12-06 10:30:21.383863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.383892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:15.387 [2024-12-06 10:30:21.383900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.060 ms 00:31:15.387 [2024-12-06 10:30:21.383907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.384082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.384090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:15.387 [2024-12-06 10:30:21.384097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:31:15.387 [2024-12-06 10:30:21.384102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.385204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.385229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:15.387 [2024-12-06 10:30:21.385236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.090 ms 00:31:15.387 [2024-12-06 10:30:21.385246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.386115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.386128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:15.387 [2024-12-06 10:30:21.386135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.846 ms 00:31:15.387 [2024-12-06 10:30:21.386141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.393849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.393877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:15.387 [2024-12-06 10:30:21.393888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.685 ms 00:31:15.387 [2024-12-06 10:30:21.393895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.397963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.397989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:31:15.387 [2024-12-06 10:30:21.397998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.029 ms 00:31:15.387 [2024-12-06 10:30:21.398005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.398064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.398073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:15.387 [2024-12-06 10:30:21.398080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:15.387 [2024-12-06 10:30:21.398089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.405324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.405352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:15.387 [2024-12-06 10:30:21.405359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.223 ms 00:31:15.387 [2024-12-06 10:30:21.405365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.412562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.412586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:15.387 [2024-12-06 10:30:21.412593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.172 ms 00:31:15.387 [2024-12-06 10:30:21.412599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.419560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.419584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:15.387 [2024-12-06 10:30:21.419591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.937 ms 00:31:15.387 [2024-12-06 10:30:21.419597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.426609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.387 [2024-12-06 10:30:21.426634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:15.387 [2024-12-06 10:30:21.426641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.969 ms 00:31:15.387 [2024-12-06 10:30:21.426646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.387 [2024-12-06 10:30:21.426669] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:15.387 [2024-12-06 10:30:21.426680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:15.387 [2024-12-06 10:30:21.426687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:15.387 [2024-12-06 10:30:21.426693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:15.387 [2024-12-06 10:30:21.426699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:15.387 [2024-12-06 10:30:21.426705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:15.387 [2024-12-06 10:30:21.426711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:15.387 [2024-12-06 10:30:21.426717] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:15.388 [2024-12-06 10:30:21.426785] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:15.388 [2024-12-06 10:30:21.426790] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 471f812c-23f7-41c3-b211-e349b3c88310 00:31:15.388 [2024-12-06 10:30:21.426796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:15.388 [2024-12-06 10:30:21.426801] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:15.388 [2024-12-06 10:30:21.426807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:15.388 [2024-12-06 10:30:21.426813] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:15.388 [2024-12-06 10:30:21.426818] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:15.388 [2024-12-06 10:30:21.426825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:15.388 [2024-12-06 10:30:21.426834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:15.388 [2024-12-06 10:30:21.426840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:15.388 [2024-12-06 10:30:21.426846] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:15.388 [2024-12-06 10:30:21.426851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.388 [2024-12-06 10:30:21.426857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:15.388 [2024-12-06 10:30:21.426863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:31:15.388 [2024-12-06 10:30:21.426868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.388 [2024-12-06 10:30:21.436492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.388 [2024-12-06 10:30:21.436518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:31:15.388 [2024-12-06 10:30:21.436526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.604 ms 00:31:15.388 [2024-12-06 10:30:21.436534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.388 [2024-12-06 10:30:21.436801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.388 [2024-12-06 10:30:21.436814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:15.388 [2024-12-06 10:30:21.436821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:31:15.388 [2024-12-06 10:30:21.436827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.388 [2024-12-06 10:30:21.469786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.388 [2024-12-06 10:30:21.469813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:15.388 [2024-12-06 10:30:21.469821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.388 [2024-12-06 10:30:21.469831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.388 [2024-12-06 10:30:21.469851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.388 [2024-12-06 10:30:21.469857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:15.388 [2024-12-06 10:30:21.469863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.388 [2024-12-06 10:30:21.469869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.388 [2024-12-06 10:30:21.469914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.388 [2024-12-06 10:30:21.469922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:15.388 [2024-12-06 10:30:21.469928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.388 [2024-12-06 10:30:21.469935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.388 [2024-12-06 10:30:21.469950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.388 [2024-12-06 10:30:21.469956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:15.388 [2024-12-06 10:30:21.469965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.388 [2024-12-06 10:30:21.469971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.388 [2024-12-06 10:30:21.529780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.388 [2024-12-06 10:30:21.529812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:15.388 [2024-12-06 10:30:21.529820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.388 [2024-12-06 10:30:21.529830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.578675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.649 [2024-12-06 10:30:21.578706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:15.649 [2024-12-06 10:30:21.578714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.649 [2024-12-06 10:30:21.578721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.578765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.649 [2024-12-06 10:30:21.578773] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:15.649 [2024-12-06 10:30:21.578779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.649 [2024-12-06 10:30:21.578785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.578829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.649 [2024-12-06 10:30:21.578843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:15.649 [2024-12-06 10:30:21.578851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.649 [2024-12-06 10:30:21.578856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.578923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.649 [2024-12-06 10:30:21.578931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:15.649 [2024-12-06 10:30:21.578937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.649 [2024-12-06 10:30:21.578942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.578965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.649 [2024-12-06 10:30:21.578973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:15.649 [2024-12-06 10:30:21.578979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.649 [2024-12-06 10:30:21.578985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.579011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.649 [2024-12-06 10:30:21.579018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:15.649 [2024-12-06 10:30:21.579025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.649 [2024-12-06 10:30:21.579031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.579063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:15.649 [2024-12-06 10:30:21.579072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:15.649 [2024-12-06 10:30:21.579078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:15.649 [2024-12-06 10:30:21.579084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.649 [2024-12-06 10:30:21.579172] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.427 ms, result 0 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:16.221 Remove shared memory files 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83483 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:16.221 00:31:16.221 real 1m20.785s 00:31:16.221 user 1m52.250s 00:31:16.221 sys 0m18.949s 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.221 ************************************ 00:31:16.221 END TEST ftl_upgrade_shutdown 00:31:16.221 ************************************ 00:31:16.221 10:30:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:16.221 10:30:22 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:31:16.221 10:30:22 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:16.221 10:30:22 ftl -- ftl/ftl.sh@14 -- # killprocess 75160 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@954 -- # '[' -z 75160 ']' 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@958 -- # kill -0 75160 00:31:16.221 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75160) - No such process 00:31:16.221 Process with pid 75160 is not found 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75160 is not found' 00:31:16.221 10:30:22 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:16.221 10:30:22 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=83939 00:31:16.221 10:30:22 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:16.221 10:30:22 ftl -- ftl/ftl.sh@20 -- # waitforlisten 83939 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@835 -- # '[' -z 83939 ']' 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.221 10:30:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:16.221 [2024-12-06 10:30:22.356509] Starting SPDK v25.01-pre git sha1 500d76084 / DPDK 24.03.0 initialization... 
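
The killprocess calls traced here (benign for the long-gone pid 75160 above, and again for the fresh spdk_tgt pid 83939 shortly after) reduce to a liveness probe followed by kill-and-wait. A simplified sketch reconstructed from the xtrace; the real helper in autotest_common.sh additionally special-cases sudo-wrapped processes via ps:

    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then  # signal 0 only probes; nothing is sent
            echo "Process with pid $pid is not found"
            return 0
        fi
        # the real helper consults ps --no-headers -o comm= "$pid" here to
        # detect a sudo wrapper before signalling; omitted in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true  # reap it if it is our child
    }
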
00:31:16.221 [2024-12-06 10:30:22.356648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83939 ] 00:31:16.482 [2024-12-06 10:30:22.516053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.482 [2024-12-06 10:30:22.602331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.052 10:30:23 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:17.052 10:30:23 ftl -- common/autotest_common.sh@868 -- # return 0 00:31:17.052 10:30:23 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:17.312 nvme0n1 00:31:17.313 10:30:23 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:17.313 10:30:23 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:17.313 10:30:23 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:17.573 10:30:23 ftl -- ftl/common.sh@28 -- # stores=6dee0fd4-4a14-43c5-b9cb-341deafd8f94 00:31:17.573 10:30:23 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:17.573 10:30:23 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dee0fd4-4a14-43c5-b9cb-341deafd8f94 00:31:17.832 10:30:23 ftl -- ftl/ftl.sh@23 -- # killprocess 83939 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@954 -- # '[' -z 83939 ']' 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@958 -- # kill -0 83939 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@959 -- # uname 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83939 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:17.832 killing process with pid 83939 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83939' 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@973 -- # kill 83939 00:31:17.832 10:30:23 ftl -- common/autotest_common.sh@978 -- # wait 83939 00:31:19.214 10:30:25 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:19.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:19.214 Waiting for block devices as requested 00:31:19.214 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:19.473 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:19.473 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:19.474 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:31:24.751 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:31:24.752 Remove shared memory files 00:31:24.752 10:30:30 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:31:24.752 10:30:30 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:24.752 10:30:30 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:31:24.752 10:30:30 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:31:24.752 10:30:30 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:31:24.752 10:30:30 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:24.752 10:30:30 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:31:24.752 00:31:24.752 real 
13m1.361s
00:31:24.752 user 15m14.821s
00:31:24.752 sys 1m17.825s
00:31:24.752 10:30:30 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:24.752 10:30:30 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:24.752 ************************************
00:31:24.752 END TEST ftl
00:31:24.752 ************************************
00:31:24.752 10:30:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:24.752 10:30:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:31:24.752 10:30:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:24.752 10:30:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:31:24.752 10:30:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:24.752 10:30:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:24.752 10:30:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:31:24.752 10:30:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:31:24.752 10:30:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:31:24.752 10:30:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:31:24.752 10:30:30 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:24.752 10:30:30 -- common/autotest_common.sh@10 -- # set +x
00:31:24.752 10:30:30 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:31:24.752 10:30:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:31:24.752 10:30:30 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:31:24.752 10:30:30 -- common/autotest_common.sh@10 -- # set +x
00:31:26.142 INFO: APP EXITING
00:31:26.142 INFO: killing all VMs
00:31:26.142 INFO: killing vhost app
00:31:26.142 INFO: EXIT DONE
00:31:26.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:26.976 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:31:26.976 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:31:26.976 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:31:26.976 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:31:27.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:27.809 Cleaning
00:31:27.809 Removing: /var/run/dpdk/spdk0/config
00:31:27.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:27.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:27.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:27.809 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:27.809 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:27.809 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:27.809 Removing: /var/run/dpdk/spdk0
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57045
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57258
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57476
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57574
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57619
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57747
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57765
00:31:27.809 Removing: /var/run/dpdk/spdk_pid57964
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58063
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58153
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58263
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58356
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58401
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58432
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58508
00:31:27.809 Removing: /var/run/dpdk/spdk_pid58614
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59045
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59102
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59155
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59171
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59273
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59284
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59386
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59396
00:31:27.809 Removing: /var/run/dpdk/spdk_pid59455
00:31:28.070 Removing: /var/run/dpdk/spdk_pid59473
00:31:28.070 Removing: /var/run/dpdk/spdk_pid59521
00:31:28.070 Removing: /var/run/dpdk/spdk_pid59538
00:31:28.070 Removing: /var/run/dpdk/spdk_pid59693
00:31:28.070 Removing: /var/run/dpdk/spdk_pid59735
00:31:28.070 Removing: /var/run/dpdk/spdk_pid59813
00:31:28.070 Removing: /var/run/dpdk/spdk_pid59990
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60069
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60111
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60556
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60659
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60768
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60821
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60847
00:31:28.070 Removing: /var/run/dpdk/spdk_pid60925
00:31:28.070 Removing: /var/run/dpdk/spdk_pid61550
00:31:28.070 Removing: /var/run/dpdk/spdk_pid61587
00:31:28.070 Removing: /var/run/dpdk/spdk_pid62071
00:31:28.070 Removing: /var/run/dpdk/spdk_pid62169
00:31:28.070 Removing: /var/run/dpdk/spdk_pid62285
00:31:28.070 Removing: /var/run/dpdk/spdk_pid62349
00:31:28.070 Removing: /var/run/dpdk/spdk_pid62369
00:31:28.070 Removing: /var/run/dpdk/spdk_pid62400
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64240
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64377
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64381
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64394
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64437
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64441
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64453
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64503
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64507
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64519
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64564
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64568
00:31:28.070 Removing: /var/run/dpdk/spdk_pid64580
00:31:28.070 Removing: /var/run/dpdk/spdk_pid65966
00:31:28.071 Removing: /var/run/dpdk/spdk_pid66063
00:31:28.071 Removing: /var/run/dpdk/spdk_pid67462
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69200
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69274
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69344
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69452
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69545
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69641
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69715
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69790
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69894
00:31:28.071 Removing: /var/run/dpdk/spdk_pid69986
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70082
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70150
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70231
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70334
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70421
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70517
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70586
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70661
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70765
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70857
00:31:28.071 Removing: /var/run/dpdk/spdk_pid70947
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71021
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71095
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71166
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71241
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71344
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71435
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71530
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71604
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71678
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71747
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71821
00:31:28.071 Removing: /var/run/dpdk/spdk_pid71930
00:31:28.071 Removing: /var/run/dpdk/spdk_pid72015
00:31:28.071 Removing: /var/run/dpdk/spdk_pid72161
00:31:28.071 Removing: /var/run/dpdk/spdk_pid72447
00:31:28.071 Removing: /var/run/dpdk/spdk_pid72478
00:31:28.071 Removing: /var/run/dpdk/spdk_pid72928
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73109
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73206
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73310
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73363
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73383
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73694
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73752
00:31:28.071 Removing: /var/run/dpdk/spdk_pid73824
00:31:28.071 Removing: /var/run/dpdk/spdk_pid74216
00:31:28.071 Removing: /var/run/dpdk/spdk_pid74356
00:31:28.071 Removing: /var/run/dpdk/spdk_pid75160
00:31:28.071 Removing: /var/run/dpdk/spdk_pid75288
00:31:28.071 Removing: /var/run/dpdk/spdk_pid75464
00:31:28.071 Removing: /var/run/dpdk/spdk_pid75557
00:31:28.071 Removing: /var/run/dpdk/spdk_pid75854
00:31:28.071 Removing: /var/run/dpdk/spdk_pid76136
00:31:28.071 Removing: /var/run/dpdk/spdk_pid76495
00:31:28.332 Removing: /var/run/dpdk/spdk_pid76673
00:31:28.332 Removing: /var/run/dpdk/spdk_pid76803
00:31:28.332 Removing: /var/run/dpdk/spdk_pid76858
00:31:28.332 Removing: /var/run/dpdk/spdk_pid77023
00:31:28.332 Removing: /var/run/dpdk/spdk_pid77048
00:31:28.332 Removing: /var/run/dpdk/spdk_pid77101
00:31:28.332 Removing: /var/run/dpdk/spdk_pid77325
00:31:28.332 Removing: /var/run/dpdk/spdk_pid77543
00:31:28.332 Removing: /var/run/dpdk/spdk_pid78155
00:31:28.332 Removing: /var/run/dpdk/spdk_pid78889
00:31:28.332 Removing: /var/run/dpdk/spdk_pid79530
00:31:28.332 Removing: /var/run/dpdk/spdk_pid80386
00:31:28.332 Removing: /var/run/dpdk/spdk_pid80528
00:31:28.332 Removing: /var/run/dpdk/spdk_pid80604
00:31:28.332 Removing: /var/run/dpdk/spdk_pid81116
00:31:28.332 Removing: /var/run/dpdk/spdk_pid81174
00:31:28.332 Removing: /var/run/dpdk/spdk_pid81674
00:31:28.332 Removing: /var/run/dpdk/spdk_pid82193
00:31:28.332 Removing: /var/run/dpdk/spdk_pid82956
00:31:28.332 Removing: /var/run/dpdk/spdk_pid83087
00:31:28.332 Removing: /var/run/dpdk/spdk_pid83134
00:31:28.332 Removing: /var/run/dpdk/spdk_pid83187
00:31:28.332 Removing: /var/run/dpdk/spdk_pid83243
00:31:28.332 Removing: /var/run/dpdk/spdk_pid83302
00:31:28.332 Removing: /var/run/dpdk/spdk_pid83483
00:31:28.332 Removing: /var/run/dpdk/spdk_pid83563
00:31:28.333 Removing: /var/run/dpdk/spdk_pid83630
00:31:28.333 Removing: /var/run/dpdk/spdk_pid83727
00:31:28.333 Removing: /var/run/dpdk/spdk_pid83766
00:31:28.333 Removing: /var/run/dpdk/spdk_pid83834
00:31:28.333 Removing: /var/run/dpdk/spdk_pid83939
00:31:28.333 Clean
00:31:28.333 10:30:34 -- common/autotest_common.sh@1453 -- # return 0
00:31:28.333 10:30:34 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:31:28.333 10:30:34 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:28.333 10:30:34 -- common/autotest_common.sh@10 -- # set +x
00:31:28.333 10:30:34 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:31:28.333 10:30:34 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:28.333 10:30:34 -- common/autotest_common.sh@10 -- # set +x
00:31:28.333 10:30:34 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:28.333 10:30:34 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:28.333 10:30:34 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:28.333 10:30:34 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:31:28.333 10:30:34 -- spdk/autotest.sh@398 -- # hostname
00:31:28.333 10:30:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:28.594 geninfo: WARNING: invalid characters removed from testname!
00:31:55.182 10:30:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:57.098 10:31:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:31:59.010 10:31:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:01.576 10:31:07 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:02.961 10:31:08 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:04.877 10:31:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:07.485 10:31:13 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:07.485 10:31:13 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:07.485 10:31:13 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:07.485 10:31:13 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:07.485 10:31:13 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:07.485 10:31:13 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:07.485 + [[ -n 5033 ]]
00:32:07.485 + sudo kill 5033
00:32:07.495 [Pipeline] }
00:32:07.513 [Pipeline] // timeout
00:32:07.517 [Pipeline] }
00:32:07.533 [Pipeline] // stage
00:32:07.540 [Pipeline] }
00:32:07.555 [Pipeline] // catchError
00:32:07.563 [Pipeline] stage
00:32:07.565 [Pipeline] { (Stop VM)
00:32:07.577 [Pipeline] sh
00:32:07.858 + vagrant halt
00:32:10.394 ==> default: Halting domain...
00:32:16.994 [Pipeline] sh
00:32:17.278 + vagrant destroy -f
00:32:19.818 ==> default: Removing domain...
00:32:20.404 [Pipeline] sh
00:32:20.685 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:32:20.696 [Pipeline] }
00:32:20.710 [Pipeline] // stage
00:32:20.716 [Pipeline] }
00:32:20.730 [Pipeline] // dir
00:32:20.735 [Pipeline] }
00:32:20.751 [Pipeline] // wrap
00:32:20.758 [Pipeline] }
00:32:20.771 [Pipeline] // catchError
00:32:20.781 [Pipeline] stage
00:32:20.784 [Pipeline] { (Epilogue)
00:32:20.798 [Pipeline] sh
00:32:21.082 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:26.377 [Pipeline] catchError
00:32:26.386 [Pipeline] {
00:32:26.400 [Pipeline] sh
00:32:26.687 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:26.688 Artifacts sizes are good
00:32:26.699 [Pipeline] }
00:32:26.714 [Pipeline] // catchError
00:32:26.727 [Pipeline] archiveArtifacts
00:32:26.735 Archiving artifacts
00:32:26.845 [Pipeline] cleanWs
00:32:26.858 [WS-CLEANUP] Deleting project workspace...
00:32:26.858 [WS-CLEANUP] Deferred wipeout is used...
00:32:26.865 [WS-CLEANUP] done
00:32:26.867 [Pipeline] }
00:32:26.882 [Pipeline] // stage
00:32:26.888 [Pipeline] }
00:32:26.901 [Pipeline] // node
00:32:26.906 [Pipeline] End of Pipeline
00:32:26.941 Finished: SUCCESS